Test Report: Docker_Linux_containerd_arm64 22352

9a7985111956b2877773a073c576921d0f069a2d:2025-12-28:43023

Failed tests (8/333)

TestForceSystemdFlag (501.3s)
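To reproduce this failure locally, re-run the start command the test invoked (copied verbatim from the log below; the profile suffix 257442 is generated per test run and will differ on your machine):

    out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd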

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m16.82059088s)

-- stdout --
	* [force-systemd-flag-257442] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-257442" primary control-plane node in "force-systemd-flag-257442" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1228 07:11:42.715378  202182 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:11:42.715558  202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:11:42.715590  202182 out.go:374] Setting ErrFile to fd 2...
	I1228 07:11:42.715612  202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:11:42.715999  202182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:11:42.716697  202182 out.go:368] Setting JSON to false
	I1228 07:11:42.718260  202182 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3253,"bootTime":1766902650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:11:42.718337  202182 start.go:143] virtualization:  
	I1228 07:11:42.722422  202182 out.go:179] * [force-systemd-flag-257442] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:11:42.725859  202182 notify.go:221] Checking for updates...
	I1228 07:11:42.726417  202182 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:11:42.729863  202182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:11:42.733034  202182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:11:42.736198  202182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:11:42.739620  202182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:11:42.742650  202182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:11:42.746164  202182 config.go:182] Loaded profile config "force-systemd-env-782848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:11:42.746308  202182 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:11:42.770870  202182 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:11:42.770972  202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:11:42.844310  202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.83443823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:11:42.844418  202182 docker.go:319] overlay module found
	I1228 07:11:42.849492  202182 out.go:179] * Using the docker driver based on user configuration
	I1228 07:11:42.852348  202182 start.go:309] selected driver: docker
	I1228 07:11:42.852368  202182 start.go:928] validating driver "docker" against <nil>
	I1228 07:11:42.852382  202182 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:11:42.853288  202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:11:42.918090  202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.898066629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:11:42.918240  202182 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:11:42.918462  202182 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:11:42.921452  202182 out.go:179] * Using Docker driver with root privileges
	I1228 07:11:42.924398  202182 cni.go:84] Creating CNI manager for ""
	I1228 07:11:42.924520  202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:11:42.924534  202182 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:11:42.924614  202182 start.go:353] cluster config:
	{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I1228 07:11:42.927742  202182 out.go:179] * Starting "force-systemd-flag-257442" primary control-plane node in "force-systemd-flag-257442" cluster
	I1228 07:11:42.930570  202182 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:11:42.933508  202182 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:11:42.936360  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:42.936405  202182 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:11:42.936416  202182 cache.go:65] Caching tarball of preloaded images
	I1228 07:11:42.936441  202182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:11:42.936533  202182 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:11:42.936546  202182 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:11:42.936653  202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
	I1228 07:11:42.936673  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json: {Name:mk1bb575eaedf054a5c39231661ba5e51bfbfb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:42.955984  202182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:11:42.956009  202182 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:11:42.956029  202182 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:11:42.956060  202182 start.go:360] acquireMachinesLock for force-systemd-flag-257442: {Name:mk182766e2370865019edd04ffc6f7524c78e636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:11:42.956174  202182 start.go:364] duration metric: took 92.899µs to acquireMachinesLock for "force-systemd-flag-257442"
	I1228 07:11:42.956203  202182 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:11:42.956270  202182 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:11:42.959751  202182 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:11:42.959984  202182 start.go:159] libmachine.API.Create for "force-systemd-flag-257442" (driver="docker")
	I1228 07:11:42.960019  202182 client.go:173] LocalClient.Create starting
	I1228 07:11:42.960087  202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
	I1228 07:11:42.960128  202182 main.go:144] libmachine: Decoding PEM data...
	I1228 07:11:42.960147  202182 main.go:144] libmachine: Parsing certificate...
	I1228 07:11:42.960199  202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
	I1228 07:11:42.960227  202182 main.go:144] libmachine: Decoding PEM data...
	I1228 07:11:42.960242  202182 main.go:144] libmachine: Parsing certificate...
	I1228 07:11:42.960646  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:11:42.976005  202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:11:42.976085  202182 network_create.go:284] running [docker network inspect force-systemd-flag-257442] to gather additional debugging logs...
	I1228 07:11:42.976106  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442
	W1228 07:11:42.991634  202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 returned with exit code 1
	I1228 07:11:42.991665  202182 network_create.go:287] error running [docker network inspect force-systemd-flag-257442]: docker network inspect force-systemd-flag-257442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-257442 not found
	I1228 07:11:42.991678  202182 network_create.go:289] output of [docker network inspect force-systemd-flag-257442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-257442 not found
	
	** /stderr **
	I1228 07:11:42.991788  202182 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:11:43.009147  202182 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
	I1228 07:11:43.009450  202182 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
	I1228 07:11:43.009714  202182 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
	I1228 07:11:43.010021  202182 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-60444ab3ee70 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:84:9e:6e:bc:3d} reservation:<nil>}
	I1228 07:11:43.010405  202182 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d72f0}
	I1228 07:11:43.010426  202182 network_create.go:124] attempt to create docker network force-systemd-flag-257442 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 07:11:43.010488  202182 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-257442 force-systemd-flag-257442
	I1228 07:11:43.066640  202182 network_create.go:108] docker network force-systemd-flag-257442 192.168.85.0/24 created
	I1228 07:11:43.066670  202182 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-257442" container
	I1228 07:11:43.066751  202182 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:11:43.086978  202182 cli_runner.go:164] Run: docker volume create force-systemd-flag-257442 --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:11:43.106995  202182 oci.go:103] Successfully created a docker volume force-systemd-flag-257442
	I1228 07:11:43.107086  202182 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-257442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --entrypoint /usr/bin/test -v force-systemd-flag-257442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:11:43.672034  202182 oci.go:107] Successfully prepared a docker volume force-systemd-flag-257442
	I1228 07:11:43.672096  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:43.672107  202182 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:11:43.672194  202182 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:11:47.619647  202182 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.947413254s)
	I1228 07:11:47.619678  202182 kic.go:203] duration metric: took 3.947567208s to extract preloaded images to volume ...
	W1228 07:11:47.619829  202182 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:11:47.619942  202182 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:11:47.682992  202182 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-257442 --name force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-257442 --network force-systemd-flag-257442 --ip 192.168.85.2 --volume force-systemd-flag-257442:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:11:47.987523  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Running}}
	I1228 07:11:48.014448  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.042972  202182 cli_runner.go:164] Run: docker exec force-systemd-flag-257442 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:11:48.101378  202182 oci.go:144] the created container "force-systemd-flag-257442" has a running status.
	I1228 07:11:48.101414  202182 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa...
	I1228 07:11:48.675904  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:11:48.675956  202182 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:11:48.704271  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.736793  202182 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:11:48.736819  202182 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-257442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:11:48.804337  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.834826  202182 machine.go:94] provisionDockerMachine start ...
	I1228 07:11:48.834944  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:48.863393  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:48.863873  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:48.863893  202182 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:11:49.032380  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
	
	I1228 07:11:49.032406  202182 ubuntu.go:182] provisioning hostname "force-systemd-flag-257442"
	I1228 07:11:49.032540  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.052336  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:49.052665  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:49.052682  202182 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-257442 && echo "force-systemd-flag-257442" | sudo tee /etc/hostname
	I1228 07:11:49.213253  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
	
	I1228 07:11:49.213336  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.236648  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:49.236959  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:49.236977  202182 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-257442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-257442/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-257442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:11:49.397038  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:11:49.397065  202182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:11:49.397085  202182 ubuntu.go:190] setting up certificates
	I1228 07:11:49.397094  202182 provision.go:84] configureAuth start
	I1228 07:11:49.397159  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:49.420305  202182 provision.go:143] copyHostCerts
	I1228 07:11:49.420345  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:11:49.420374  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:11:49.420386  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:11:49.420564  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:11:49.420662  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:11:49.420680  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:11:49.420685  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:11:49.420715  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:11:49.420761  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:11:49.420776  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:11:49.420780  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:11:49.420805  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:11:49.420852  202182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-257442 san=[127.0.0.1 192.168.85.2 force-systemd-flag-257442 localhost minikube]
	I1228 07:11:49.646258  202182 provision.go:177] copyRemoteCerts
	I1228 07:11:49.646332  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:11:49.646373  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.667681  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:49.768622  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:11:49.768692  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:11:49.786043  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:11:49.786115  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:11:49.805713  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:11:49.805777  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:11:49.824117  202182 provision.go:87] duration metric: took 427.001952ms to configureAuth
	I1228 07:11:49.824142  202182 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:11:49.824330  202182 config.go:182] Loaded profile config "force-systemd-flag-257442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:11:49.824345  202182 machine.go:97] duration metric: took 989.496866ms to provisionDockerMachine
	I1228 07:11:49.824352  202182 client.go:176] duration metric: took 6.864322529s to LocalClient.Create
	I1228 07:11:49.824369  202182 start.go:167] duration metric: took 6.864385431s to libmachine.API.Create "force-systemd-flag-257442"
	I1228 07:11:49.824377  202182 start.go:293] postStartSetup for "force-systemd-flag-257442" (driver="docker")
	I1228 07:11:49.824385  202182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:11:49.824441  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:11:49.824572  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.841697  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:49.940326  202182 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:11:49.943423  202182 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:11:49.943449  202182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:11:49.943460  202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:11:49.943515  202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:11:49.943595  202182 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:11:49.943601  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /etc/ssl/certs/41952.pem
	I1228 07:11:49.943695  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:11:49.950748  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:11:49.976888  202182 start.go:296] duration metric: took 152.497114ms for postStartSetup
	I1228 07:11:49.977259  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:49.998212  202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
	I1228 07:11:49.998522  202182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:11:49.998567  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.030466  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.125879  202182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:11:50.131149  202182 start.go:128] duration metric: took 7.174863789s to createHost
	I1228 07:11:50.131177  202182 start.go:83] releasing machines lock for "force-systemd-flag-257442", held for 7.174990436s
	I1228 07:11:50.131248  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:50.148157  202182 ssh_runner.go:195] Run: cat /version.json
	I1228 07:11:50.148166  202182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:11:50.148207  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.148236  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.172404  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.174217  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.357542  202182 ssh_runner.go:195] Run: systemctl --version
	I1228 07:11:50.363928  202182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:11:50.368163  202182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:11:50.368231  202182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:11:50.395201  202182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:11:50.395227  202182 start.go:496] detecting cgroup driver to use...
	I1228 07:11:50.395241  202182 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:11:50.395299  202182 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:11:50.410474  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:11:50.423445  202182 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:11:50.423535  202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:11:50.440554  202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:11:50.458778  202182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:11:50.577463  202182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:11:50.701377  202182 docker.go:234] disabling docker service ...
	I1228 07:11:50.701466  202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:11:50.726518  202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:11:50.741501  202182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:11:50.867242  202182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:11:50.974607  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:11:50.987492  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:11:51.008605  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:11:51.019015  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:11:51.028781  202182 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:11:51.028861  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:11:51.038465  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:11:51.047159  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:11:51.055758  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:11:51.064984  202182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:11:51.072909  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:11:51.081912  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:11:51.090824  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:11:51.099899  202182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:11:51.107450  202182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:11:51.115067  202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:11:51.235548  202182 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:11:51.376438  202182 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:11:51.376630  202182 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:11:51.380725  202182 start.go:574] Will wait 60s for crictl version
	I1228 07:11:51.380800  202182 ssh_runner.go:195] Run: which crictl
	I1228 07:11:51.384409  202182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:11:51.409180  202182 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:11:51.409291  202182 ssh_runner.go:195] Run: containerd --version
	I1228 07:11:51.430646  202182 ssh_runner.go:195] Run: containerd --version
	I1228 07:11:51.460595  202182 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:11:51.463697  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:11:51.480057  202182 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:11:51.484647  202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:11:51.495570  202182 kubeadm.go:884] updating cluster {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:11:51.495689  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:51.495768  202182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:11:51.534723  202182 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:11:51.534803  202182 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:11:51.534903  202182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:11:51.559789  202182 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:11:51.559808  202182 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:11:51.559817  202182 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1228 07:11:51.559914  202182 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-257442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:11:51.559976  202182 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:11:51.585689  202182 cni.go:84] Creating CNI manager for ""
	I1228 07:11:51.585767  202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:11:51.585801  202182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:11:51.585861  202182 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-257442 NodeName:force-systemd-flag-257442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:11:51.586026  202182 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-257442"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:11:51.586150  202182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:11:51.594509  202182 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:11:51.594591  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:11:51.602306  202182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1228 07:11:51.614702  202182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:11:51.627096  202182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1228 07:11:51.639529  202182 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:11:51.643115  202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:11:51.652078  202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:11:51.778040  202182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:11:51.796591  202182 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442 for IP: 192.168.85.2
	I1228 07:11:51.796682  202182 certs.go:195] generating shared ca certs ...
	I1228 07:11:51.796718  202182 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:51.796936  202182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:11:51.797027  202182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:11:51.797064  202182 certs.go:257] generating profile certs ...
	I1228 07:11:51.797180  202182 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key
	I1228 07:11:51.797224  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt with IP's: []
	I1228 07:11:52.013074  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt ...
	I1228 07:11:52.013118  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt: {Name:mk7aed4b1361cad35efdb364bf3318878e0ba011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.013324  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key ...
	I1228 07:11:52.013339  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key: {Name:mk8ec5637167dd5ffdf85444ad06fe325864a279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.013439  202182 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be
	I1228 07:11:52.013462  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 07:11:52.367478  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be ...
	I1228 07:11:52.367511  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be: {Name:mkda9f7af1a3a08068bbee1ddd2a4b4ef4a9f820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.367692  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be ...
	I1228 07:11:52.367707  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be: {Name:mk045bbb68239d684b49be802faad160202aaf3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.367798  202182 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt
	I1228 07:11:52.367875  202182 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key
	I1228 07:11:52.367939  202182 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key
	I1228 07:11:52.367956  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt with IP's: []
	I1228 07:11:52.450774  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt ...
	I1228 07:11:52.450804  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt: {Name:mkad6c1484d2eff4419d1163b5dc950a7aeb71a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.450986  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key ...
	I1228 07:11:52.450999  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key: {Name:mk7ffb474cec5cc67e49a8a4a4b043205762d02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.451100  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:11:52.451122  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:11:52.451135  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:11:52.451157  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:11:52.451173  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:11:52.451198  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:11:52.451213  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:11:52.451224  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
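The IP list at 07:11:52.013462 becomes the apiserver certificate's subject alternative names: the service VIP 10.96.0.1 (first address of serviceSubnet), loopback, 10.0.0.1, and the node IP 192.168.85.2, so clients can verify the apiserver under any of those addresses. To confirm what a generated cert actually covers, inspect its SANs (path copied from the log; substitute your own MINIKUBE_HOME):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt \
      | grep -A1 'Subject Alternative Name'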
	I1228 07:11:52.451276  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:11:52.451317  202182 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:11:52.451330  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:11:52.451359  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:11:52.451383  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:11:52.451418  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:11:52.451466  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:11:52.451500  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.451519  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.451533  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem -> /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.452048  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:11:52.470544  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:11:52.489878  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:11:52.510247  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:11:52.528132  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:11:52.545968  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:11:52.563355  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:11:52.580238  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:11:52.598910  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:11:52.617614  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:11:52.636247  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:11:52.654304  202182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:11:52.667049  202182 ssh_runner.go:195] Run: openssl version
	I1228 07:11:52.673735  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.681295  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:11:52.688626  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.692403  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.692584  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.735038  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:11:52.742898  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:11:52.750466  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.758067  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:11:52.765682  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.769877  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.769968  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.810873  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:11:52.818298  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:11:52.825860  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.833320  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:11:52.840574  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.844181  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.844245  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.885195  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:11:52.893615  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
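The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: verifiers locate a CA in /etc/ssl/certs by the hash of its subject name, so every trusted PEM needs a <hash>.0 symlink (b5213941 is minikubeCA's hash in this run). The same registration step as a generic sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # example cert from the log
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"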
	I1228 07:11:52.900889  202182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:11:52.904539  202182 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:11:52.904638  202182 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:11:52.904749  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:11:52.915402  202182 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:11:52Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
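The runc failure here is expected on a pristine node: minikube probes for paused containers before StartCluster, but /run/containerd/runc/k8s.io only exists once containerd has created at least one container in the k8s.io namespace, so this warning is benign at this stage. Once the runtime has work, either of the following lists the namespace's containers (the CRI endpoint matches containerRuntimeEndpoint in the config above):

    sudo runc --root /run/containerd/runc/k8s.io list
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a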
	I1228 07:11:52.915477  202182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:11:52.923486  202182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:11:52.931211  202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:11:52.931307  202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:11:52.939006  202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:11:52.939027  202182 kubeadm.go:158] found existing configuration files:
	
	I1228 07:11:52.939087  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:11:52.946627  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:11:52.946691  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:11:52.954506  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:11:52.963900  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:11:52.963966  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:11:52.971542  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:11:52.979414  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:11:52.979485  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:11:52.986647  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:11:52.994899  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:11:52.995009  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
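The four grep/rm pairs above are one stale-config sweep repeated per kubeconfig: if a file does not mention the expected control-plane endpoint, it is removed so kubeadm can regenerate it (exit status 2 from grep here simply means the file does not exist yet). Collapsed into a loop, as a sketch:

    EP=https://control-plane.minikube.internal:8443   # endpoint from this run
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done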
	I1228 07:11:53.003577  202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:11:53.051927  202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:11:53.056727  202182 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:11:53.128709  202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:11:53.128782  202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:11:53.128818  202182 kubeadm.go:319] OS: Linux
	I1228 07:11:53.128866  202182 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:11:53.128914  202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:11:53.128962  202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:11:53.129012  202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:11:53.129062  202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:11:53.129111  202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:11:53.129156  202182 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:11:53.129205  202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:11:53.129251  202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:11:53.196911  202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:11:53.197098  202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:11:53.197193  202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:11:53.206716  202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:11:53.210202  202182 out.go:252]   - Generating certificates and keys ...
	I1228 07:11:53.210291  202182 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:11:53.210361  202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:11:53.342406  202182 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:11:53.807332  202182 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:11:54.152653  202182 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:11:54.360536  202182 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:11:54.510375  202182 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:11:54.510779  202182 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:11:54.630196  202182 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:11:54.630431  202182 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:11:55.093747  202182 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:11:55.202960  202182 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:11:55.357297  202182 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:11:55.357650  202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:11:55.557158  202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:11:55.707761  202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:11:55.947840  202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:11:56.066861  202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:11:56.190344  202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:11:56.190993  202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:11:56.193691  202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:11:56.197563  202182 out.go:252]   - Booting up control plane ...
	I1228 07:11:56.197679  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:11:56.197771  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:11:56.197847  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:11:56.216231  202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:11:56.216354  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:11:56.223498  202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:11:56.224057  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:11:56.224309  202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:11:56.359584  202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:11:56.359704  202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:15:56.359053  202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000118942s
	I1228 07:15:56.359085  202182 kubeadm.go:319] 
	I1228 07:15:56.359144  202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:15:56.359183  202182 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:15:56.359292  202182 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:15:56.359301  202182 kubeadm.go:319] 
	I1228 07:15:56.359405  202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:15:56.359441  202182 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:15:56.359476  202182 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:15:56.359484  202182 kubeadm.go:319] 
	I1228 07:15:56.372655  202182 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:15:56.373414  202182 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:15:56.373650  202182 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:15:56.374256  202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:15:56.374302  202182 kubeadm.go:319] 
	I1228 07:15:56.374426  202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
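kubeadm gives up after polling the kubelet healthz endpoint for four minutes, and its suggested next step is to read the kubelet's own logs. With the docker driver those commands have to run inside the node container, e.g. (profile name taken from this test; a diagnostic sketch, not part of minikube's flow):

    docker exec force-systemd-flag-257442 systemctl status kubelet --no-pager
    docker exec force-systemd-flag-257442 journalctl -u kubelet --no-pager -n 50
    # the endpoint kubeadm was polling:
    docker exec force-systemd-flag-257442 curl -sS http://127.0.0.1:10248/healthz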
	W1228 07:15:56.374572  202182 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000118942s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
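Before minikube resets and retries below, note the second SystemVerification warning: this host runs cgroups v1 (kernel 5.15 on AWS), and kubelet v1.35+ refuses to start on v1 unless the FailCgroupV1 option is explicitly disabled, which is a plausible reason the kubelet never became healthy. A way to check the host's cgroup mode, plus an illustrative (not minikube's own) way to set the option the warning names:

    stat -fc %T /sys/fs/cgroup   # cgroup2fs => v2; tmpfs => v1 (the case in this run)
    # Illustrative only; field name as referenced in the warning above.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml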
	
	I1228 07:15:56.374955  202182 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1228 07:15:56.816009  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:15:56.830130  202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:15:56.830189  202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:15:56.839676  202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:15:56.839743  202182 kubeadm.go:158] found existing configuration files:
	
	I1228 07:15:56.839818  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:15:56.848800  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:15:56.848913  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:15:56.858141  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:15:56.868016  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:15:56.868125  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:15:56.876557  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.886001  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:15:56.886129  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.894421  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:15:56.903733  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:15:56.903858  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:15:56.912105  202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:15:56.973760  202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:15:56.974624  202182 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:15:57.076378  202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:15:57.076579  202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:15:57.076651  202182 kubeadm.go:319] OS: Linux
	I1228 07:15:57.076720  202182 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:15:57.076805  202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:15:57.076885  202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:15:57.076967  202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:15:57.077050  202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:15:57.077135  202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:15:57.077218  202182 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:15:57.077302  202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:15:57.077386  202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:15:57.173412  202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:15:57.173584  202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:15:57.173716  202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:15:57.193049  202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:15:57.196350  202182 out.go:252]   - Generating certificates and keys ...
	I1228 07:15:57.196587  202182 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:15:57.196675  202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:15:57.196779  202182 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:15:57.197830  202182 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:15:57.198363  202182 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:15:57.198849  202182 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:15:57.199374  202182 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:15:57.199787  202182 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:15:57.200352  202182 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:15:57.200860  202182 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:15:57.201385  202182 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:15:57.201487  202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:15:57.595218  202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:15:57.831579  202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:15:58.069431  202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:15:58.608051  202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:15:58.960100  202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:15:58.960768  202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:15:58.963496  202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:15:58.967038  202182 out.go:252]   - Booting up control plane ...
	I1228 07:15:58.967133  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:15:58.967207  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:15:58.968494  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:15:58.990175  202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:15:58.990624  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:15:58.998239  202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:15:58.998885  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:15:58.998948  202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:15:59.134789  202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:15:59.134903  202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:19:59.133520  202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000148247s
	I1228 07:19:59.133544  202182 kubeadm.go:319] 
	I1228 07:19:59.133603  202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:19:59.133636  202182 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:19:59.134115  202182 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:19:59.134145  202182 kubeadm.go:319] 
	I1228 07:19:59.134503  202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:19:59.134568  202182 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:19:59.134623  202182 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:19:59.134628  202182 kubeadm.go:319] 
	I1228 07:19:59.139795  202182 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:19:59.140678  202182 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:19:59.141000  202182 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:19:59.142131  202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:19:59.142218  202182 kubeadm.go:319] 
	I1228 07:19:59.142358  202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:19:59.142429  202182 kubeadm.go:403] duration metric: took 8m6.237794878s to StartCluster
	I1228 07:19:59.142536  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.154918  202182 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.154991  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.166191  202182 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.166259  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.177549  202182 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.177619  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.188550  202182 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.188622  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.199522  202182 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.199608  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.222184  202182 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.222259  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.238199  202182 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.238223  202182 logs.go:123] Gathering logs for containerd ...
	I1228 07:19:59.238235  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1228 07:19:59.285575  202182 logs.go:123] Gathering logs for container status ...
	I1228 07:19:59.285608  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:19:59.317760  202182 logs.go:123] Gathering logs for kubelet ...
	I1228 07:19:59.317788  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:19:59.379482  202182 logs.go:123] Gathering logs for dmesg ...
	I1228 07:19:59.379521  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:19:59.397974  202182 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:19:59.398001  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:19:59.472720  202182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:19:59.462854    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.463664    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.466781    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.467148    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.468708    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
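Every describe-nodes probe above dies with connection refused on localhost:8443 for the same root cause: the kubelet never came up, so the kube-apiserver static pod was never started and nothing is listening on the secure port. A quick check from inside the node confirms that before digging through component logs (sketch; profile name from this test):

    docker exec force-systemd-flag-257442 sh -c \
      'curl -k -sS --max-time 3 https://localhost:8443/healthz || echo "apiserver not listening"'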
	W1228 07:19:59.472790  202182 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:19:59.472889  202182 out.go:285] * 
	W1228 07:19:59.472973  202182 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:19:59.472991  202182 out.go:285] * 
	W1228 07:19:59.473250  202182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:19:59.480297  202182 out.go:203] 
	W1228 07:19:59.483295  202182 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:19:59.483369  202182 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:19:59.483394  202182 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:19:59.486670  202182 out.go:203] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
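
[triage note] The failure above bottoms out in kubeadm's wait-control-plane phase: the kubelet never answers http://127.0.0.1:10248/healthz, and the preflight output flags the cgroup v1 host (the SystemVerification warning asks for the kubelet configuration option 'FailCgroupV1' to be set to 'false' on such hosts). Below is a minimal retry sketch, reusing this run's profile name and quoting the --extra-config value verbatim from the suggestion above; the follow-up diagnostics are the ones kubeadm itself names:

	out/minikube-linux-arm64 delete -p force-systemd-flag-257442
	out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 \
	  --force-systemd --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses port 10248, inspect it from inside the node:
	out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh -- curl -sSL http://127.0.0.1:10248/healthz
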
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:19:59.894768358 +0000 UTC m=+3122.546597803
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-257442
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-257442:

-- stdout --
	[
	    {
	        "Id": "df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed",
	        "Created": "2025-12-28T07:11:47.699128468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202616,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:11:47.761413388Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/hostname",
	        "HostsPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/hosts",
	        "LogPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed-json.log",
	        "Name": "/force-systemd-flag-257442",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-257442:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-257442",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed",
	                "LowerDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-257442",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-257442/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-257442",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-257442",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-257442",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "749b572e0b759ff76bd21e42cc7c467a75cdc4cadfbd58ed0720a6113433b82b",
	            "SandboxKey": "/var/run/docker/netns/749b572e0b75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-257442": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:84:78:58:5b:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f65e8822beda8f345fed2fac182d65f1a5b5f1057db521193b1e64cea0af58c2",
	                    "EndpointID": "74a46310530e51cd6b12cd1d4107d49ae137a2581b6bf9e7941c933a0f817d14",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-257442",
	                        "df7d2cc9f5a1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
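
[triage note] Rather than scanning the full JSON above, the fields relevant to this failure can be pulled with Go templates (standard docker CLI; the template paths are keys visible in the inspect output, and the container name comes from this run):

	docker inspect -f '{{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}} mem={{.HostConfig.Memory}}' force-systemd-flag-257442
	docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-flag-257442
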
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257442 -n force-systemd-flag-257442
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257442 -n force-systemd-flag-257442: exit status 6 (682.671844ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:20:00.565869  231453 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257442" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
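
[triage note] The exit-6 status is a knock-on effect of the failed start: the profile was never written to the kubeconfig, so the endpoint lookup fails. Once a start succeeds, the stale context can be repaired exactly as the status output suggests, e.g.:

	out/minikube-linux-arm64 -p force-systemd-flag-257442 update-context
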
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-257442 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p cert-options-913529                                                                                                                                                                                                                              │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ stop    │ -p old-k8s-version-251758 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
	│ image   │ old-k8s-version-251758 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ pause   │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
	│ stop    │ -p no-preload-863373 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
	│ addons  │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ image   │ no-preload-863373 image list --format=json                                                                                                                                                                                                          │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ pause   │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ unpause │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                                                                                                │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                                                                                                │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-468470        │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │                     │
	│ ssh     │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:19:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:19:15.855527  228126 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:19:15.855701  228126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:19:15.855731  228126 out.go:374] Setting ErrFile to fd 2...
	I1228 07:19:15.855753  228126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:19:15.856019  228126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:19:15.856535  228126 out.go:368] Setting JSON to false
	I1228 07:19:15.857395  228126 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3706,"bootTime":1766902650,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:19:15.857493  228126 start.go:143] virtualization:  
	I1228 07:19:15.861932  228126 out.go:179] * [embed-certs-468470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:19:15.866530  228126 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:19:15.866619  228126 notify.go:221] Checking for updates...
	I1228 07:19:15.870626  228126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:19:15.873924  228126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:19:15.877084  228126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:19:15.880271  228126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:19:15.883516  228126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:19:15.887232  228126 config.go:182] Loaded profile config "force-systemd-flag-257442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:19:15.887398  228126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:19:15.920930  228126 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:19:15.921049  228126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:19:15.993776  228126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:19:15.982927547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:19:15.993882  228126 docker.go:319] overlay module found
	I1228 07:19:15.999413  228126 out.go:179] * Using the docker driver based on user configuration
	I1228 07:19:16.002624  228126 start.go:309] selected driver: docker
	I1228 07:19:16.002656  228126 start.go:928] validating driver "docker" against <nil>
	I1228 07:19:16.002671  228126 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:19:16.003491  228126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:19:16.078708  228126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:19:16.068288874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
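The `docker system info --format "{{json .}}"` dumps above are what the driver validator inspects before settling on the docker driver. As a minimal stand-alone sketch (not minikube's actual parser in info.go), decoding a few of the fields visible in this log could look like the following; the struct covers only an illustrative subset of the JSON keys:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds a small subset of the fields seen in the dump above.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
	OSType       string `json:"OSType"`
	Architecture string `json:"Architecture"`
}

func main() {
	// Same invocation as the logged cli_runner step.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}
```

Note that the CgroupDriver:cgroupfs value surfaced here matches the cgroup driver the run detects and configures further down.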
	I1228 07:19:16.078855  228126 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:19:16.079069  228126 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:19:16.082125  228126 out.go:179] * Using Docker driver with root privileges
	I1228 07:19:16.085153  228126 cni.go:84] Creating CNI manager for ""
	I1228 07:19:16.085226  228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:19:16.085242  228126 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:19:16.085336  228126 start.go:353] cluster config:
	{Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:19:16.088491  228126 out.go:179] * Starting "embed-certs-468470" primary control-plane node in "embed-certs-468470" cluster
	I1228 07:19:16.091359  228126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:19:16.094407  228126 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:19:16.097395  228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:19:16.097444  228126 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:19:16.097467  228126 cache.go:65] Caching tarball of preloaded images
	I1228 07:19:16.097467  228126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:19:16.097546  228126 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:19:16.097556  228126 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:19:16.097657  228126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json ...
	I1228 07:19:16.097681  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json: {Name:mk9b95fe4d627fe34aac6746b83e81a6d6cc5dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:16.116788  228126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:19:16.116808  228126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:19:16.116825  228126 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:19:16.116855  228126 start.go:360] acquireMachinesLock for embed-certs-468470: {Name:mke430c2aaf951f831e2ac8aaeccff9516da0ba2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:19:16.116957  228126 start.go:364] duration metric: took 83.061µs to acquireMachinesLock for "embed-certs-468470"
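The lock parameters recorded above (Delay:500ms Timeout:10m0s) describe an acquire-with-retry discipline around machine creation. Below is a hypothetical stand-alone sketch of that pattern using an exclusive lock file; the path and the file-based mechanism are illustrative, not minikube's actual lock implementation:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire tries to create the lock file exclusively, retrying every
// delay until the timeout elapses (mirroring Delay:500ms Timeout:10m0s).
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f, nil // lock held; remove the file to release it
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println("lock acquired")
}
```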
	I1228 07:19:16.116988  228126 start.go:93] Provisioning new machine with config: &{Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:19:16.117062  228126 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:19:16.120406  228126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:19:16.120647  228126 start.go:159] libmachine.API.Create for "embed-certs-468470" (driver="docker")
	I1228 07:19:16.120685  228126 client.go:173] LocalClient.Create starting
	I1228 07:19:16.120747  228126 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
	I1228 07:19:16.120785  228126 main.go:144] libmachine: Decoding PEM data...
	I1228 07:19:16.120808  228126 main.go:144] libmachine: Parsing certificate...
	I1228 07:19:16.120867  228126 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
	I1228 07:19:16.120889  228126 main.go:144] libmachine: Decoding PEM data...
	I1228 07:19:16.120900  228126 main.go:144] libmachine: Parsing certificate...
	I1228 07:19:16.121248  228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:19:16.137735  228126 cli_runner.go:211] docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:19:16.137815  228126 network_create.go:284] running [docker network inspect embed-certs-468470] to gather additional debugging logs...
	I1228 07:19:16.137834  228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470
	W1228 07:19:16.152355  228126 cli_runner.go:211] docker network inspect embed-certs-468470 returned with exit code 1
	I1228 07:19:16.152436  228126 network_create.go:287] error running [docker network inspect embed-certs-468470]: docker network inspect embed-certs-468470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-468470 not found
	I1228 07:19:16.152502  228126 network_create.go:289] output of [docker network inspect embed-certs-468470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-468470 not found
	
	** /stderr **
	I1228 07:19:16.152603  228126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:19:16.168855  228126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
	I1228 07:19:16.169179  228126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
	I1228 07:19:16.169493  228126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
	I1228 07:19:16.169906  228126 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e94f0}
	I1228 07:19:16.169933  228126 network_create.go:124] attempt to create docker network embed-certs-468470 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1228 07:19:16.169990  228126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-468470 embed-certs-468470
	I1228 07:19:16.222201  228126 network_create.go:108] docker network embed-certs-468470 192.168.76.0/24 created
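The three "skipping subnet ... that is taken" lines show the scan that settled on 192.168.76.0/24: candidates start at 192.168.49.0/24 and, judging by the sequence in this log (49, 58, 67, 76), advance the third octet in steps of 9 until a subnet has no existing bridge. A hedged Go sketch of that scan; the taken set is hard-coded here to mirror the bridges listed above, and the step size is inferred from the log rather than taken from minikube's source:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets already claimed by br-0cde5aa00dd2, br-7076eb593482,
	// br-30438d931074 per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	for octet := 49; octet < 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if _, _, err := net.ParseCIDR(cidr); err != nil {
			continue // skip anything that is not a valid CIDR
		}
		if !taken[cidr] {
			fmt.Println("using free private subnet", cidr) // -> 192.168.76.0/24
			return
		}
	}
}
```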
	I1228 07:19:16.222236  228126 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-468470" container
	I1228 07:19:16.222325  228126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:19:16.238622  228126 cli_runner.go:164] Run: docker volume create embed-certs-468470 --label name.minikube.sigs.k8s.io=embed-certs-468470 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:19:16.255856  228126 oci.go:103] Successfully created a docker volume embed-certs-468470
	I1228 07:19:16.255937  228126 cli_runner.go:164] Run: docker run --rm --name embed-certs-468470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468470 --entrypoint /usr/bin/test -v embed-certs-468470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:19:16.780750  228126 oci.go:107] Successfully prepared a docker volume embed-certs-468470
	I1228 07:19:16.780809  228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:19:16.780819  228126 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:19:16.780881  228126 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-468470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:19:20.632070  228126 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-468470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.851149642s)
	I1228 07:19:20.632105  228126 kic.go:203] duration metric: took 3.851281533s to extract preloaded images to volume ...
	W1228 07:19:20.632236  228126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:19:20.632354  228126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:19:20.685476  228126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-468470 --name embed-certs-468470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-468470 --network embed-certs-468470 --ip 192.168.76.2 --volume embed-certs-468470:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:19:21.004837  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Running}}
	I1228 07:19:21.025928  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:21.052728  228126 cli_runner.go:164] Run: docker exec embed-certs-468470 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:19:21.107301  228126 oci.go:144] the created container "embed-certs-468470" has a running status.
	I1228 07:19:21.107330  228126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa...
	I1228 07:19:21.206855  228126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:19:21.230585  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:21.260868  228126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:19:21.260885  228126 kic_runner.go:114] Args: [docker exec --privileged embed-certs-468470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:19:21.321021  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:21.345509  228126 machine.go:94] provisionDockerMachine start ...
	I1228 07:19:21.345596  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:21.365232  228126 main.go:144] libmachine: Using SSH client type: native
	I1228 07:19:21.366170  228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1228 07:19:21.366189  228126 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:19:21.373060  228126 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47780->127.0.0.1:33075: read: connection reset by peer
	I1228 07:19:24.507813  228126 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-468470
	
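Note the sequence just above: the first SSH dial at 07:19:21 is reset (sshd inside the freshly started container is not listening yet), and the same command succeeds three seconds later. A simplified sketch of such a wait loop, probing only TCP reachability of the forwarded port rather than performing a full SSH handshake; the address and timeout are illustrative:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH redials the forwarded SSH port until it accepts a TCP
// connection or the overall timeout elapses.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
}

func main() {
	// 127.0.0.1:33075 is the host port mapped to the container's 22/tcp above.
	if err := waitForSSH("127.0.0.1:33075", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("ssh port reachable")
}
```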
	I1228 07:19:24.507840  228126 ubuntu.go:182] provisioning hostname "embed-certs-468470"
	I1228 07:19:24.507904  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:24.525322  228126 main.go:144] libmachine: Using SSH client type: native
	I1228 07:19:24.525629  228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1228 07:19:24.525645  228126 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-468470 && echo "embed-certs-468470" | sudo tee /etc/hostname
	I1228 07:19:24.669369  228126 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-468470
	
	I1228 07:19:24.669442  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:24.686412  228126 main.go:144] libmachine: Using SSH client type: native
	I1228 07:19:24.686718  228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1228 07:19:24.686734  228126 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:19:24.821357  228126 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:19:24.821380  228126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:19:24.821406  228126 ubuntu.go:190] setting up certificates
	I1228 07:19:24.821416  228126 provision.go:84] configureAuth start
	I1228 07:19:24.821473  228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
	I1228 07:19:24.843887  228126 provision.go:143] copyHostCerts
	I1228 07:19:24.843970  228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:19:24.843983  228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:19:24.844098  228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:19:24.844210  228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:19:24.844216  228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:19:24.844250  228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:19:24.844326  228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:19:24.844341  228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:19:24.844376  228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:19:24.844446  228126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468470 san=[127.0.0.1 192.168.76.2 embed-certs-468470 localhost minikube]
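The server certificate generated above carries the SANs [127.0.0.1 192.168.76.2 embed-certs-468470 localhost minikube]. As an illustration only (minikube's real helper signs with the minikube CA listed in the log; this sketch self-signs for brevity), a Go program producing a certificate with those SANs might look like this:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-468470"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		DNSNames:    []string{"embed-certs-468470", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed here; the logged flow passes the CA cert and ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```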
	I1228 07:19:24.940479  228126 provision.go:177] copyRemoteCerts
	I1228 07:19:24.940563  228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:19:24.940654  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:24.959228  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:25.068416  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:19:25.086084  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:19:25.103550  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:19:25.120390  228126 provision.go:87] duration metric: took 298.960917ms to configureAuth
	I1228 07:19:25.120421  228126 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:19:25.120645  228126 config.go:182] Loaded profile config "embed-certs-468470": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:19:25.120661  228126 machine.go:97] duration metric: took 3.775133456s to provisionDockerMachine
	I1228 07:19:25.120669  228126 client.go:176] duration metric: took 8.999974108s to LocalClient.Create
	I1228 07:19:25.120687  228126 start.go:167] duration metric: took 9.000040488s to libmachine.API.Create "embed-certs-468470"
	I1228 07:19:25.120695  228126 start.go:293] postStartSetup for "embed-certs-468470" (driver="docker")
	I1228 07:19:25.120707  228126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:19:25.120762  228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:19:25.120805  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:25.138516  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:25.236642  228126 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:19:25.239796  228126 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:19:25.239822  228126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:19:25.239833  228126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:19:25.239885  228126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:19:25.239971  228126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:19:25.240078  228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:19:25.247453  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:19:25.264336  228126 start.go:296] duration metric: took 143.623195ms for postStartSetup
	I1228 07:19:25.264751  228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
	I1228 07:19:25.281462  228126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json ...
	I1228 07:19:25.281766  228126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:19:25.281817  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:25.298041  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:25.393255  228126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:19:25.397701  228126 start.go:128] duration metric: took 9.280616658s to createHost
	I1228 07:19:25.397773  228126 start.go:83] releasing machines lock for "embed-certs-468470", held for 9.280801939s
	I1228 07:19:25.397873  228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
	I1228 07:19:25.415714  228126 ssh_runner.go:195] Run: cat /version.json
	I1228 07:19:25.415760  228126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:19:25.415770  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:25.415815  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:25.437516  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:25.438133  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:25.532052  228126 ssh_runner.go:195] Run: systemctl --version
	I1228 07:19:25.621277  228126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:19:25.625520  228126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:19:25.625593  228126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:19:25.652436  228126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:19:25.652480  228126 start.go:496] detecting cgroup driver to use...
	I1228 07:19:25.652529  228126 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:19:25.652598  228126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:19:25.667425  228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:19:25.679845  228126 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:19:25.679953  228126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:19:25.697058  228126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:19:25.715630  228126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:19:25.841178  228126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:19:25.965301  228126 docker.go:234] disabling docker service ...
	I1228 07:19:25.965364  228126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:19:25.987529  228126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:19:26.005942  228126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:19:26.124496  228126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:19:26.232388  228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:19:26.244608  228126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:19:26.258117  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:19:26.266620  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:19:26.275111  228126 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:19:26.275232  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:19:26.284232  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:19:26.292564  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:19:26.300953  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:19:26.309434  228126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:19:26.317183  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:19:26.325692  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:19:26.334482  228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:19:26.343248  228126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:19:26.350679  228126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:19:26.357814  228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:19:26.468738  228126 ssh_runner.go:195] Run: sudo systemctl restart containerd
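The sed runs above rewrite /etc/containerd/config.toml so containerd uses the "cgroupfs" driver detected on the host, then reload systemd and restart the daemon. The key edit, SystemdCgroup = false, could equivalently be done from Go; a hypothetical sketch (the real flow shells out to sed over SSH, exactly as logged):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitution as the logged sed: force SystemdCgroup = false
	// on every matching line, preserving its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```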
	I1228 07:19:26.608217  228126 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:19:26.608291  228126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:19:26.612366  228126 start.go:574] Will wait 60s for crictl version
	I1228 07:19:26.612435  228126 ssh_runner.go:195] Run: which crictl
	I1228 07:19:26.615732  228126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:19:26.639804  228126 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:19:26.639879  228126 ssh_runner.go:195] Run: containerd --version
	I1228 07:19:26.658994  228126 ssh_runner.go:195] Run: containerd --version
	I1228 07:19:26.682435  228126 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:19:26.685439  228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:19:26.701161  228126 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:19:26.704989  228126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:19:26.714128  228126 kubeadm.go:884] updating cluster {Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:19:26.714240  228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:19:26.714310  228126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:19:26.745726  228126 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:19:26.745750  228126 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:19:26.745808  228126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:19:26.770424  228126 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:19:26.770445  228126 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:19:26.770453  228126 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1228 07:19:26.770593  228126 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-468470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:19:26.770663  228126 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:19:26.798279  228126 cni.go:84] Creating CNI manager for ""
	I1228 07:19:26.798304  228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:19:26.798320  228126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:19:26.798356  228126 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468470 NodeName:embed-certs-468470 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:19:26.798483  228126 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-468470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
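The rendered kubeadm config above is not applied in place: as the following lines show, it is copied to /var/tmp/minikube/kubeadm.yaml.new (2251 bytes) and only promoted to kubeadm.yaml later, via `sudo cp`, just before the cluster starts. A minimal sketch of that two-step write, assuming local file access instead of the SSH runner the log actually uses; the truncated config body is illustrative:

```go
package main

import "os"

func main() {
	// Stand-in for the full rendered config shown above.
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n")
	// Stage as kubeadm.yaml.new, mirroring the `scp memory --> kubeadm.yaml.new` step.
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", cfg, 0o644); err != nil {
		panic(err)
	}
	// Promote it (the logged flow uses `sudo cp` over SSH for the same effect).
	if err := os.Rename("/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		panic(err)
	}
}
```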
	I1228 07:19:26.798553  228126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:19:26.806138  228126 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:19:26.806227  228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:19:26.813664  228126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1228 07:19:26.826966  228126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:19:26.839537  228126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
	I1228 07:19:26.852166  228126 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:19:26.855759  228126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:19:26.865783  228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:19:26.974833  228126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:19:26.990153  228126 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470 for IP: 192.168.76.2
	I1228 07:19:26.990170  228126 certs.go:195] generating shared ca certs ...
	I1228 07:19:26.990185  228126 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:26.990326  228126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:19:26.990377  228126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:19:26.990385  228126 certs.go:257] generating profile certs ...
	I1228 07:19:26.990448  228126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key
	I1228 07:19:26.990466  228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt with IP's: []
	I1228 07:19:27.425811  228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt ...
	I1228 07:19:27.425845  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt: {Name:mkb79580e8540dbbfaebd8ca79c423a035a96d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.426088  228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key ...
	I1228 07:19:27.426104  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key: {Name:mk9d9bf51638090ade7e9193ee7c1bf78591647c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.426239  228126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338
	I1228 07:19:27.426260  228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1228 07:19:27.606180  228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 ...
	I1228 07:19:27.606207  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338: {Name:mka1ece8130add6a9fa45d6969188597caff796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.606385  228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338 ...
	I1228 07:19:27.606400  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338: {Name:mkd5cb9d9c7b4f9d06fef0319d1c296938643eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.606488  228126 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt
	I1228 07:19:27.606570  228126 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key
	I1228 07:19:27.606662  228126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key
	I1228 07:19:27.606681  228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt with IP's: []
	I1228 07:19:27.931376  228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt ...
	I1228 07:19:27.931405  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt: {Name:mk9e597c4c024bbac614c08ef0919f65c7022cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.931585  228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key ...
	I1228 07:19:27.931598  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key: {Name:mk1d959289b8333386c68b4dcfec6e816455d42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:27.931790  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:19:27.931835  228126 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:19:27.931850  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:19:27.931878  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:19:27.931906  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:19:27.931932  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:19:27.931984  228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:19:27.932574  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:19:27.954201  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:19:27.973843  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:19:27.993420  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:19:28.015789  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1228 07:19:28.038484  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:19:28.056187  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:19:28.073720  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:19:28.091830  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:19:28.109629  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:19:28.126776  228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:19:28.144645  228126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:19:28.157500  228126 ssh_runner.go:195] Run: openssl version
	I1228 07:19:28.163759  228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:19:28.171623  228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:19:28.179055  228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:19:28.182745  228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:19:28.182814  228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:19:28.223850  228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:19:28.231363  228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
	I1228 07:19:28.238828  228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:19:28.246253  228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:19:28.253868  228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:19:28.257637  228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:19:28.257706  228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:19:28.298516  228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:19:28.306116  228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:19:28.313539  228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:19:28.320788  228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:19:28.328332  228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:19:28.332223  228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:19:28.332287  228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:19:28.373381  228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:19:28.381041  228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:19:28.388558  228126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:19:28.392224  228126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
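A failed stat on apiserver-kubelet-client.crt is interpreted above as "likely first start" rather than as an error. The equivalent check in stdlib Go, as a sketch:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Missing cert => fresh cluster; any other stat failure is a real error.
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert doesn't exist, likely first start")
	} else if err != nil {
		panic(err)
	}
}
```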
	I1228 07:19:28.392276  228126 kubeadm.go:401] StartCluster: {Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:19:28.392401  228126 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:19:28.403735  228126 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:28Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
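
The unpause probe fails harmlessly here: /run/containerd/runc/k8s.io is runc's state directory for containerd's k8s.io namespace, and it only exists once a container has been created, so on a fresh node the listing exits 1 and minikube moves on. The same check by hand, using the command from the log:

	# List runc state for containerd's k8s.io namespace; on a node that
	# has never run a pod the state root is absent and this exits non-zero.
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  || echo "no k8s.io containers yet"
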
	I1228 07:19:28.403820  228126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:19:28.412127  228126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:19:28.420944  228126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:19:28.421025  228126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:19:28.428847  228126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:19:28.428881  228126 kubeadm.go:158] found existing configuration files:
	
	I1228 07:19:28.428936  228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:19:28.437768  228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:19:28.437869  228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:19:28.445718  228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:19:28.453450  228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:19:28.453516  228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:19:28.461229  228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:19:28.469661  228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:19:28.469774  228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:19:28.477804  228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:19:28.486440  228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:19:28.486552  228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
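
The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig that does not reference the expected control-plane endpoint is deleted before kubeadm regenerates it. The same sweep as one loop, with the paths and endpoint taken from the log:

	# Drop kubeconfigs that do not point at the expected endpoint.
	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
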
	I1228 07:19:28.494320  228126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:19:28.542158  228126 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:19:28.542218  228126 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:19:28.621508  228126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:19:28.621668  228126 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:19:28.621752  228126 kubeadm.go:319] OS: Linux
	I1228 07:19:28.621841  228126 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:19:28.621922  228126 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:19:28.622005  228126 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:19:28.622089  228126 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:19:28.622173  228126 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:19:28.622257  228126 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:19:28.622346  228126 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:19:28.622437  228126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:19:28.622524  228126 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:19:28.705890  228126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:19:28.706005  228126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:19:28.706101  228126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:19:28.713035  228126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:19:28.719479  228126 out.go:252]   - Generating certificates and keys ...
	I1228 07:19:28.719580  228126 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:19:28.719656  228126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:19:29.007204  228126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:19:29.332150  228126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:19:29.561813  228126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:19:29.693999  228126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:19:29.980869  228126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:19:29.981292  228126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-468470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:19:30.078386  228126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:19:30.078847  228126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-468470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:19:30.436671  228126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:19:30.640145  228126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:19:30.829003  228126 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:19:30.829278  228126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:19:30.888274  228126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:19:30.964959  228126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:19:31.262278  228126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:19:31.748871  228126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:19:31.976397  228126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:19:31.976989  228126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:19:31.979621  228126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:19:31.983318  228126 out.go:252]   - Booting up control plane ...
	I1228 07:19:31.983424  228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:19:31.983502  228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:19:31.983569  228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:19:31.999836  228126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:19:31.999959  228126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:19:32.008406  228126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:19:32.011860  228126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:19:32.012130  228126 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:19:32.146133  228126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:19:32.147734  228126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:19:33.150021  228126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00232577s
	I1228 07:19:33.153484  228126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 07:19:33.153579  228126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1228 07:19:33.153890  228126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 07:19:33.153984  228126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 07:19:35.162354  228126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.008492574s
	I1228 07:19:36.735973  228126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.582474429s
	I1228 07:19:38.656042  228126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502351991s
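
The control-plane-check phase polls plain HTTP(S) health endpoints, so the same probes work by hand when a start hangs here. Addresses are taken from the log lines above; the HTTPS ports serve self-signed certificates, hence -k:

	curl -s  http://127.0.0.1:10248/healthz; echo    # kubelet
	curl -sk https://127.0.0.1:10257/healthz; echo   # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez; echo     # kube-scheduler
	curl -sk https://192.168.76.2:8443/livez; echo   # kube-apiserver
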
	I1228 07:19:38.693604  228126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 07:19:38.720594  228126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 07:19:38.740108  228126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 07:19:38.740639  228126 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-468470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 07:19:38.754118  228126 kubeadm.go:319] [bootstrap-token] Using token: hmbmvb.uu6m3nlil2j14dzg
	I1228 07:19:38.757114  228126 out.go:252]   - Configuring RBAC rules ...
	I1228 07:19:38.757239  228126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 07:19:38.762445  228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 07:19:38.773192  228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 07:19:38.784104  228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 07:19:38.789139  228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 07:19:38.794519  228126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 07:19:39.065357  228126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 07:19:39.494566  228126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 07:19:40.063254  228126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 07:19:40.064622  228126 kubeadm.go:319] 
	I1228 07:19:40.064695  228126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 07:19:40.064701  228126 kubeadm.go:319] 
	I1228 07:19:40.064799  228126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 07:19:40.064820  228126 kubeadm.go:319] 
	I1228 07:19:40.064847  228126 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 07:19:40.064910  228126 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 07:19:40.064969  228126 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 07:19:40.064975  228126 kubeadm.go:319] 
	I1228 07:19:40.065029  228126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 07:19:40.065033  228126 kubeadm.go:319] 
	I1228 07:19:40.065081  228126 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 07:19:40.065085  228126 kubeadm.go:319] 
	I1228 07:19:40.065137  228126 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 07:19:40.065219  228126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 07:19:40.065287  228126 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 07:19:40.065291  228126 kubeadm.go:319] 
	I1228 07:19:40.065376  228126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 07:19:40.065457  228126 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 07:19:40.065462  228126 kubeadm.go:319] 
	I1228 07:19:40.065547  228126 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hmbmvb.uu6m3nlil2j14dzg \
	I1228 07:19:40.065665  228126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:245ab1e37d24b07cc412580775e400c938559b58f26292f1d84f87371e4e4a5f \
	I1228 07:19:40.065686  228126 kubeadm.go:319] 	--control-plane 
	I1228 07:19:40.065690  228126 kubeadm.go:319] 
	I1228 07:19:40.065774  228126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 07:19:40.065778  228126 kubeadm.go:319] 
	I1228 07:19:40.065861  228126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hmbmvb.uu6m3nlil2j14dzg \
	I1228 07:19:40.065963  228126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:245ab1e37d24b07cc412580775e400c938559b58f26292f1d84f87371e4e4a5f 
	I1228 07:19:40.070135  228126 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:19:40.070585  228126 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:19:40.070702  228126 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
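
Of the three warnings above, only Service-kubelet is actionable on an ordinary host; inside the kicbase node minikube manages the kubelet itself, but on a self-managed node the quoted fix applies as-is:

	# Make the kubelet start on boot, per the preflight warning above.
	sudo systemctl enable kubelet.service
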
	I1228 07:19:40.070723  228126 cni.go:84] Creating CNI manager for ""
	I1228 07:19:40.070737  228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:19:40.073889  228126 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1228 07:19:40.076807  228126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 07:19:40.081116  228126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 07:19:40.081136  228126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 07:19:40.094421  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 07:19:40.388171  228126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 07:19:40.388303  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:40.388385  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-468470 minikube.k8s.io/updated_at=2025_12_28T07_19_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=embed-certs-468470 minikube.k8s.io/primary=true
	I1228 07:19:40.542163  228126 ops.go:34] apiserver oom_adj: -16
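
The oom_adj probe confirms the apiserver runs with a strongly negative OOM score (-16 here), so the kernel's OOM killer prefers to reap almost anything else first. Checking it by hand, mirroring the command in the log:

	# Read the OOM adjustment of the running kube-apiserver process.
	cat /proc/$(pgrep kube-apiserver)/oom_adj
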
	I1228 07:19:40.542294  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:41.043207  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:41.542984  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:42.042481  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:42.542394  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:43.042699  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:43.542658  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:44.043094  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:44.542606  228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 07:19:44.651612  228126 kubeadm.go:1114] duration metric: took 4.263355456s to wait for elevateKubeSystemPrivileges
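
The half-second cadence of the `kubectl get sa default` calls above is a wait loop: the minikube-rbac clusterrolebinding is only useful once the controller-manager has created the default service account. A minimal equivalent loop with the same binary and kubeconfig:

	# Poll until the default service account exists.
	until sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
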
	I1228 07:19:44.651643  228126 kubeadm.go:403] duration metric: took 16.259370963s to StartCluster
	I1228 07:19:44.651673  228126 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:44.651733  228126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:19:44.652769  228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:19:44.652992  228126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:19:44.653096  228126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 07:19:44.653366  228126 config.go:182] Loaded profile config "embed-certs-468470": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:19:44.653413  228126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:19:44.653474  228126 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-468470"
	I1228 07:19:44.653490  228126 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-468470"
	I1228 07:19:44.653517  228126 host.go:66] Checking if "embed-certs-468470" exists ...
	I1228 07:19:44.654003  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:44.654328  228126 addons.go:70] Setting default-storageclass=true in profile "embed-certs-468470"
	I1228 07:19:44.654361  228126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-468470"
	I1228 07:19:44.654643  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:44.656956  228126 out.go:179] * Verifying Kubernetes components...
	I1228 07:19:44.670552  228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:19:44.678677  228126 addons.go:239] Setting addon default-storageclass=true in "embed-certs-468470"
	I1228 07:19:44.678725  228126 host.go:66] Checking if "embed-certs-468470" exists ...
	I1228 07:19:44.679154  228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
	I1228 07:19:44.711673  228126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:19:44.711692  228126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:19:44.711762  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:44.712322  228126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:19:44.718442  228126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:19:44.718466  228126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:19:44.718527  228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
	I1228 07:19:44.741219  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:44.756182  228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
	I1228 07:19:45.009747  228126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 07:19:45.026127  228126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:19:45.038234  228126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:19:45.054117  228126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:19:45.691432  228126 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
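
The sed pipeline a few lines up splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the gateway (192.168.76.1 here). One way to confirm the record landed, assuming the standard coredns ConfigMap layout with a Corefile key:

	# Show the injected host record in the CoreDNS Corefile.
	sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
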
	I1228 07:19:45.693665  228126 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468470" to be "Ready" ...
	I1228 07:19:46.056952  228126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002728114s)
	I1228 07:19:46.060447  228126 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1228 07:19:46.062691  228126 addons.go:530] duration metric: took 1.409273436s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1228 07:19:46.198573  228126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-468470" context rescaled to 1 replicas
	W1228 07:19:47.696627  228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
	W1228 07:19:49.697468  228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
	W1228 07:19:52.196267  228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
	W1228 07:19:54.696228  228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
	I1228 07:19:59.133520  202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000148247s
	I1228 07:19:59.133544  202182 kubeadm.go:319] 
	I1228 07:19:59.133603  202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:19:59.133636  202182 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:19:59.134115  202182 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:19:59.134145  202182 kubeadm.go:319] 
	I1228 07:19:59.134503  202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:19:59.134568  202182 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:19:59.134623  202182 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:19:59.134628  202182 kubeadm.go:319] 
	I1228 07:19:59.139795  202182 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:19:59.140678  202182 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:19:59.141000  202182 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:19:59.142131  202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:19:59.142218  202182 kubeadm.go:319] 
	I1228 07:19:59.142358  202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:19:59.142429  202182 kubeadm.go:403] duration metric: took 8m6.237794878s to StartCluster
	I1228 07:19:59.142536  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.154918  202182 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.154991  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.166191  202182 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.166259  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.177549  202182 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.177619  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.188550  202182 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.188622  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.199522  202182 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.199608  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.222184  202182 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.222259  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:19:59.238199  202182 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:19:59.238223  202182 logs.go:123] Gathering logs for containerd ...
	I1228 07:19:59.238235  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1228 07:19:59.285575  202182 logs.go:123] Gathering logs for container status ...
	I1228 07:19:59.285608  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:19:59.317760  202182 logs.go:123] Gathering logs for kubelet ...
	I1228 07:19:59.317788  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:19:59.379482  202182 logs.go:123] Gathering logs for dmesg ...
	I1228 07:19:59.379521  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:19:59.397974  202182 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:19:59.398001  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:19:59.472720  202182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:19:59.462854    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.463664    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.466781    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.467148    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.468708    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:19:59.462854    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.463664    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.466781    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.467148    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:19:59.468708    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
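
Every kubectl probe above dies with connection refused on localhost:8443, which matches the kubelet never having launched the static pods. A quick triage from inside the node checks both halves of that story (ss and journalctl are standard tools, not minikube-specific):

	# Is anything listening on the apiserver port, and did the kubelet
	# at least receive its static pod manifests?
	sudo ss -ltnp | grep 8443 || echo "nothing on 8443"
	ls /etc/kubernetes/manifests/
	sudo journalctl -xeu kubelet | tail -n 50
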
	W1228 07:19:59.472790  202182 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:19:59.472889  202182 out.go:285] * 
	W1228 07:19:59.472973  202182 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:19:59.472991  202182 out.go:285] * 
	W1228 07:19:59.473250  202182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:19:59.480297  202182 out.go:203] 
	W1228 07:19:59.483295  202182 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000148247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:19:59.483369  202182 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:19:59.483394  202182 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:19:59.486670  202182 out.go:203] 
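
The exit advice above points at a cgroup-driver mismatch between the kubelet and the runtime. A hedged retry along the lines it suggests; the profile name is a placeholder, and only the --extra-config flag comes from the log:

	# Retry with the kubelet pinned to the systemd cgroup driver.
	minikube start -p <profile> \
	  --extra-config=kubelet.cgroup-driver=systemd
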
	W1228 07:19:56.697226  228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
	I1228 07:19:57.696380  228126 node_ready.go:49] node "embed-certs-468470" is "Ready"
	I1228 07:19:57.696409  228126 node_ready.go:38] duration metric: took 12.00272085s for node "embed-certs-468470" to be "Ready" ...
	I1228 07:19:57.696422  228126 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:19:57.696498  228126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:19:57.708529  228126 api_server.go:72] duration metric: took 13.055498336s to wait for apiserver process to appear ...
	I1228 07:19:57.708556  228126 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:19:57.708576  228126 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:19:57.716881  228126 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:19:57.718118  228126 api_server.go:141] control plane version: v1.35.0
	I1228 07:19:57.718142  228126 api_server.go:131] duration metric: took 9.579269ms to wait for apiserver health ...
	I1228 07:19:57.718152  228126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:19:57.721027  228126 system_pods.go:59] 8 kube-system pods found
	I1228 07:19:57.721069  228126 system_pods.go:61] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:19:57.721076  228126 system_pods.go:61] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:57.721082  228126 system_pods.go:61] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:57.721087  228126 system_pods.go:61] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:57.721098  228126 system_pods.go:61] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:57.721102  228126 system_pods.go:61] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:57.721111  228126 system_pods.go:61] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:57.721117  228126 system_pods.go:61] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:19:57.721131  228126 system_pods.go:74] duration metric: took 2.972933ms to wait for pod list to return data ...
	I1228 07:19:57.721138  228126 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:19:57.723530  228126 default_sa.go:45] found service account: "default"
	I1228 07:19:57.723555  228126 default_sa.go:55] duration metric: took 2.41118ms for default service account to be created ...
	I1228 07:19:57.723565  228126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:19:57.726275  228126 system_pods.go:86] 8 kube-system pods found
	I1228 07:19:57.726311  228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:19:57.726319  228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:57.726326  228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:57.726341  228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:57.726347  228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:57.726358  228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:57.726367  228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:57.726378  228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:19:57.726402  228126 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1228 07:19:57.925706  228126 system_pods.go:86] 8 kube-system pods found
	I1228 07:19:57.925747  228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:19:57.925755  228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:57.925762  228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:57.925767  228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:57.925773  228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:57.925777  228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:57.925784  228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:57.925796  228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:19:58.292842  228126 system_pods.go:86] 8 kube-system pods found
	I1228 07:19:58.292878  228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:19:58.292886  228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:58.292892  228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:58.292899  228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:58.292904  228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:58.292909  228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:58.292916  228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:58.292928  228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:19:58.632221  228126 system_pods.go:86] 8 kube-system pods found
	I1228 07:19:58.632280  228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:19:58.632292  228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:58.632310  228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:58.632329  228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:58.632339  228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:58.632355  228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:58.632371  228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:58.632391  228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:19:59.022266  228126 system_pods.go:86] 8 kube-system pods found
	I1228 07:19:59.022302  228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Running
	I1228 07:19:59.022309  228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
	I1228 07:19:59.022315  228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
	I1228 07:19:59.022320  228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
	I1228 07:19:59.022326  228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
	I1228 07:19:59.022340  228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
	I1228 07:19:59.022348  228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:19:59.022364  228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Running
	I1228 07:19:59.022377  228126 system_pods.go:126] duration metric: took 1.298805695s to wait for k8s-apps to be running ...
	I1228 07:19:59.022385  228126 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:19:59.022444  228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:19:59.035825  228126 system_svc.go:56] duration metric: took 13.431751ms WaitForService to wait for kubelet
	I1228 07:19:59.035905  228126 kubeadm.go:587] duration metric: took 14.382878883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:19:59.035947  228126 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:19:59.039570  228126 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:19:59.039606  228126 node_conditions.go:123] node cpu capacity is 2
	I1228 07:19:59.039626  228126 node_conditions.go:105] duration metric: took 3.643577ms to run NodePressure ...
	I1228 07:19:59.039658  228126 start.go:242] waiting for startup goroutines ...
	I1228 07:19:59.039677  228126 start.go:247] waiting for cluster config update ...
	I1228 07:19:59.039688  228126 start.go:256] writing updated cluster config ...
	I1228 07:19:59.039986  228126 ssh_runner.go:195] Run: rm -f paused
	I1228 07:19:59.043553  228126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:19:59.047028  228126 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p9hf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.051181  228126 pod_ready.go:94] pod "coredns-7d764666f9-p9hf5" is "Ready"
	I1228 07:19:59.051256  228126 pod_ready.go:86] duration metric: took 4.196853ms for pod "coredns-7d764666f9-p9hf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.053736  228126 pod_ready.go:83] waiting for pod "etcd-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.058200  228126 pod_ready.go:94] pod "etcd-embed-certs-468470" is "Ready"
	I1228 07:19:59.058227  228126 pod_ready.go:86] duration metric: took 4.463711ms for pod "etcd-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.060566  228126 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.072788  228126 pod_ready.go:94] pod "kube-apiserver-embed-certs-468470" is "Ready"
	I1228 07:19:59.072815  228126 pod_ready.go:86] duration metric: took 12.228105ms for pod "kube-apiserver-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.075450  228126 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.447893  228126 pod_ready.go:94] pod "kube-controller-manager-embed-certs-468470" is "Ready"
	I1228 07:19:59.447917  228126 pod_ready.go:86] duration metric: took 372.444355ms for pod "kube-controller-manager-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:19:59.648394  228126 pod_ready.go:83] waiting for pod "kube-proxy-r6p5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:20:00.086250  228126 pod_ready.go:94] pod "kube-proxy-r6p5h" is "Ready"
	I1228 07:20:00.086278  228126 pod_ready.go:86] duration metric: took 437.856104ms for pod "kube-proxy-r6p5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:20:00.302872  228126 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:20:00.648355  228126 pod_ready.go:94] pod "kube-scheduler-embed-certs-468470" is "Ready"
	I1228 07:20:00.648390  228126 pod_ready.go:86] duration metric: took 345.489801ms for pod "kube-scheduler-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:20:00.648438  228126 pod_ready.go:40] duration metric: took 1.604852026s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:20:00.720223  228126 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1228 07:20:00.723439  228126 out.go:203] 
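
Note: the 228126 lines interleaved above come from a concurrent, successful run. It polls the apiserver healthz endpoint, waits for the eight kube-system pods, then does a per-pod Ready wait. A minimal manual equivalent of those two checks, assuming a host that can reach 192.168.76.2:8443 and a kubectl context pointed at the embed-certs-468470 cluster:

    curl -k https://192.168.76.2:8443/healthz    # prints "ok" on a healthy apiserver
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

The -k flag skips TLS verification, since the cluster's self-signed CA is not in the local trust store; the unauthenticated healthz probe works because the default system:public-info-viewer binding exposes /healthz, /livez and /readyz.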
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318525688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318592888Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318697382Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318767077Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318826573Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318905154Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318978828Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319039005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319099625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319184681Z" level=info msg="Connect containerd service"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319572319Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.320220038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.334422500Z" level=info msg="Start subscribing containerd event"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.334531367Z" level=info msg="Start recovering state"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.335224994Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.335440044Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373191378Z" level=info msg="Start event monitor"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373254024Z" level=info msg="Start cni network conf syncer for default"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373264839Z" level=info msg="Start streaming server"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373273881Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373283095Z" level=info msg="runtime interface starting up..."
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373290103Z" level=info msg="starting plugins..."
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373321036Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 07:11:51 force-systemd-flag-257442 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.375388370Z" level=info msg="containerd successfully booted in 0.084510s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:20:01.302796    4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:20:01.303556    4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:20:01.305353    4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:20:01.305933    4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:20:01.307761    4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
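
Note: the describe-nodes failure above is a symptom rather than the cause. kubectl on the node dials localhost:8443 and nothing is listening, because the kubelet never survived long enough to start the static kube-apiserver pod. Assuming the profile were still up, one quick confirmation from the host would be:

    minikube -p force-systemd-flag-257442 ssh "sudo ss -tlnp | grep 8443"

No output (grep exits 1) means nothing ever bound the apiserver port inside the node.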
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:20:01 up  1:02,  0 user,  load average: 1.38, 1.59, 1.72
	Linux force-systemd-flag-257442 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:19:58 force-systemd-flag-257442 kubelet[4723]: E1228 07:19:58.760376    4723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:19:59 force-systemd-flag-257442 kubelet[4795]: E1228 07:19:59.564120    4795 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:20:00 force-systemd-flag-257442 kubelet[4808]: E1228 07:20:00.419936    4808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:20:01 force-systemd-flag-257442 kubelet[4895]: E1228 07:20:01.291107    4895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
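
Note: the kubelet journal above contains the actual root cause of the exit-109 failure: kubelet v1.35.0 fails config validation on a cgroup v1 host, and systemd restarts it in a loop (322 attempts by the end of the log) without ever reaching the point of launching static pods. A standard check for which cgroup hierarchy a machine is on, valid on the host or inside the node container (which shares the host kernel):

    stat -fc %T /sys/fs/cgroup/
    # cgroup2fs -> cgroup v2 (unified hierarchy); tmpfs -> cgroup v1 (legacy/hybrid)

On this Ubuntu 20.04 builder the expected answer is tmpfs, i.e. cgroup v1, matching the error.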
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:20:01.034938  231528 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.049275  231528 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.062724  231528 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.075850  231528 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.089281  231528 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.103631  231528 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:20:01.116310  231528 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"

                                                
                                                
** /stderr **
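
Note: the runc errors above are secondary noise from the log collector. minikube lists containers with "sudo runc --root /run/containerd/runc/k8s.io list -f json", and that state directory only exists once containerd has created at least one container in the k8s.io namespace, which never happened here. With the containerd socket up, CRI- or containerd-level listing degrades more gracefully; as a sketch:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    sudo ctr --namespace k8s.io containers list

Both print an empty listing, rather than a hard error, when no pods were ever created.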
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257442 -n force-systemd-flag-257442
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257442 -n force-systemd-flag-257442: exit status 6 (404.837611ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:20:01.891695  231739 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257442" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig

                                                
                                                
** /stderr **
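
Note: the status probe then exits 6 because the profile's endpoint was never written to the kubeconfig. The warning in stdout names the repair; had the profile still existed at this point, it would be:

    out/minikube-linux-arm64 -p force-systemd-flag-257442 update-context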
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-257442" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-257442" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-257442
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-257442: (2.068256005s)
--- FAIL: TestForceSystemdFlag (501.30s)

                                                
                                    
TestForceSystemdEnv (507.58s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-782848 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-782848 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m23.821387082s)

                                                
                                                
-- stdout --
	* [force-systemd-env-782848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-782848" primary control-plane node in "force-systemd-env-782848" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 07:05:22.711836  181774 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:22.712023  181774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:22.712044  181774 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:22.712062  181774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:22.712327  181774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:05:22.712761  181774 out.go:368] Setting JSON to false
	I1228 07:05:22.713613  181774 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2873,"bootTime":1766902650,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:05:22.713701  181774 start.go:143] virtualization:  
	I1228 07:05:22.718532  181774 out.go:179] * [force-systemd-env-782848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:05:22.722842  181774 notify.go:221] Checking for updates...
	I1228 07:05:22.724213  181774 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:22.729272  181774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:22.732542  181774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:05:22.735857  181774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:05:22.739807  181774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:05:22.743671  181774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1228 07:05:22.747334  181774 config.go:182] Loaded profile config "test-preload-118685": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:22.747481  181774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:22.786668  181774 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:05:22.786781  181774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:22.880663  181774 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-28 07:05:22.871230676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:05:22.880762  181774 docker.go:319] overlay module found
	I1228 07:05:22.886750  181774 out.go:179] * Using the docker driver based on user configuration
	I1228 07:05:22.890153  181774 start.go:309] selected driver: docker
	I1228 07:05:22.890175  181774 start.go:928] validating driver "docker" against <nil>
	I1228 07:05:22.890203  181774 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:05:22.890863  181774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:22.963802  181774 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-28 07:05:22.954883947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:05:22.963953  181774 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:05:22.964178  181774 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:05:22.967612  181774 out.go:179] * Using Docker driver with root privileges
	I1228 07:05:22.970587  181774 cni.go:84] Creating CNI manager for ""
	I1228 07:05:22.970654  181774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:22.970671  181774 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:05:22.970751  181774 start.go:353] cluster config:
	{Name:force-systemd-env-782848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-782848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:22.973895  181774 out.go:179] * Starting "force-systemd-env-782848" primary control-plane node in "force-systemd-env-782848" cluster
	I1228 07:05:22.976776  181774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:05:22.979799  181774 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:05:22.982708  181774 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:22.982759  181774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:05:22.982794  181774 cache.go:65] Caching tarball of preloaded images
	I1228 07:05:22.982882  181774 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:05:22.982896  181774 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:05:22.983009  181774 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/config.json ...
	I1228 07:05:22.983032  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/config.json: {Name:mk5d4cae3c06232f532ff6cacfb937d72d9e555d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:22.983191  181774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:05:23.005265  181774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:05:23.005338  181774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:05:23.005377  181774 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:05:23.005433  181774 start.go:360] acquireMachinesLock for force-systemd-env-782848: {Name:mk90cc32ba397a73b1bc586ca6e87360b68af3f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:05:23.005578  181774 start.go:364] duration metric: took 111.854µs to acquireMachinesLock for "force-systemd-env-782848"
	I1228 07:05:23.005641  181774 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-782848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-782848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:05:23.005733  181774 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:05:23.011150  181774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:05:23.011432  181774 start.go:159] libmachine.API.Create for "force-systemd-env-782848" (driver="docker")
	I1228 07:05:23.011497  181774 client.go:173] LocalClient.Create starting
	I1228 07:05:23.011610  181774 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
	I1228 07:05:23.011674  181774 main.go:144] libmachine: Decoding PEM data...
	I1228 07:05:23.011705  181774 main.go:144] libmachine: Parsing certificate...
	I1228 07:05:23.011774  181774 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
	I1228 07:05:23.011820  181774 main.go:144] libmachine: Decoding PEM data...
	I1228 07:05:23.011853  181774 main.go:144] libmachine: Parsing certificate...
	I1228 07:05:23.012282  181774 cli_runner.go:164] Run: docker network inspect force-systemd-env-782848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:05:23.030124  181774 cli_runner.go:211] docker network inspect force-systemd-env-782848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:05:23.030223  181774 network_create.go:284] running [docker network inspect force-systemd-env-782848] to gather additional debugging logs...
	I1228 07:05:23.030240  181774 cli_runner.go:164] Run: docker network inspect force-systemd-env-782848
	W1228 07:05:23.050406  181774 cli_runner.go:211] docker network inspect force-systemd-env-782848 returned with exit code 1
	I1228 07:05:23.050441  181774 network_create.go:287] error running [docker network inspect force-systemd-env-782848]: docker network inspect force-systemd-env-782848: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-782848 not found
	I1228 07:05:23.050466  181774 network_create.go:289] output of [docker network inspect force-systemd-env-782848]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-782848 not found
	
	** /stderr **
	I1228 07:05:23.050563  181774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:05:23.067159  181774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
	I1228 07:05:23.067435  181774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
	I1228 07:05:23.067700  181774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
	I1228 07:05:23.068064  181774 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cf7c0}
	I1228 07:05:23.068080  181774 network_create.go:124] attempt to create docker network force-systemd-env-782848 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1228 07:05:23.068136  181774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-782848 force-systemd-env-782848
	I1228 07:05:23.150366  181774 network_create.go:108] docker network force-systemd-env-782848 192.168.76.0/24 created
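
Note: at this point the run has skipped three occupied /24 subnets and created a dedicated bridge network with a static gateway. The result can be verified with the same Go-template style the harness uses elsewhere; assuming the network still exists:

    docker network inspect force-systemd-env-782848 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # 192.168.76.0/24 192.168.76.1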
	I1228 07:05:23.150394  181774 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-782848" container
	I1228 07:05:23.150501  181774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:05:23.178136  181774 cli_runner.go:164] Run: docker volume create force-systemd-env-782848 --label name.minikube.sigs.k8s.io=force-systemd-env-782848 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:05:23.202401  181774 oci.go:103] Successfully created a docker volume force-systemd-env-782848
	I1228 07:05:23.202494  181774 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-782848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-782848 --entrypoint /usr/bin/test -v force-systemd-env-782848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:05:23.979730  181774 oci.go:107] Successfully prepared a docker volume force-systemd-env-782848
	I1228 07:05:23.979798  181774 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:23.979814  181774 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:05:23.979920  181774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-782848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:05:30.196155  181774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-782848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.216194884s)
	I1228 07:05:30.196183  181774 kic.go:203] duration metric: took 6.21636721s to extract preloaded images to volume ...
	W1228 07:05:30.196317  181774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:05:30.196418  181774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:05:30.281019  181774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-782848 --name force-systemd-env-782848 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-782848 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-782848 --network force-systemd-env-782848 --ip 192.168.76.2 --volume force-systemd-env-782848:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:05:30.637163  181774 cli_runner.go:164] Run: docker container inspect force-systemd-env-782848 --format={{.State.Running}}
	I1228 07:05:30.662752  181774 cli_runner.go:164] Run: docker container inspect force-systemd-env-782848 --format={{.State.Status}}
	I1228 07:05:30.685594  181774 cli_runner.go:164] Run: docker exec force-systemd-env-782848 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:05:30.744560  181774 oci.go:144] the created container "force-systemd-env-782848" has a running status.
	I1228 07:05:30.744595  181774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa...
	I1228 07:05:31.040451  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:05:31.040917  181774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:05:31.073090  181774 cli_runner.go:164] Run: docker container inspect force-systemd-env-782848 --format={{.State.Status}}
	I1228 07:05:31.097938  181774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:05:31.097956  181774 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-782848 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:05:31.159542  181774 cli_runner.go:164] Run: docker container inspect force-systemd-env-782848 --format={{.State.Status}}
	I1228 07:05:31.184878  181774 machine.go:94] provisionDockerMachine start ...
	I1228 07:05:31.184969  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:31.204817  181774 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:31.205158  181774 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33015 <nil> <nil>}
	I1228 07:05:31.205168  181774 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:05:31.205707  181774 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53326->127.0.0.1:33015: read: connection reset by peer
	I1228 07:05:34.344346  181774 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-782848
	
	I1228 07:05:34.344368  181774 ubuntu.go:182] provisioning hostname "force-systemd-env-782848"
	I1228 07:05:34.344443  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:34.366857  181774 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:34.367251  181774 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33015 <nil> <nil>}
	I1228 07:05:34.367266  181774 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-782848 && echo "force-systemd-env-782848" | sudo tee /etc/hostname
	I1228 07:05:34.539496  181774 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-782848
	
	I1228 07:05:34.539638  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:34.565944  181774 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:34.566264  181774 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33015 <nil> <nil>}
	I1228 07:05:34.566280  181774 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-782848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-782848/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-782848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:05:34.713061  181774 main.go:144] libmachine: SSH cmd err, output: <nil>: 
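	Not part of the captured run, but for context: the guarded rewrite above is idempotent, touching /etc/hosts only when the 127.0.1.1 mapping is absent. A quick spot-check from the host, reusing the container name from this log:
	
	  docker exec force-systemd-env-782848 grep force-systemd-env-782848 /etc/hosts
	  # expected: 127.0.1.1 force-systemd-env-782848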
	I1228 07:05:34.713136  181774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:05:34.713182  181774 ubuntu.go:190] setting up certificates
	I1228 07:05:34.713220  181774 provision.go:84] configureAuth start
	I1228 07:05:34.713309  181774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-782848
	I1228 07:05:34.736393  181774 provision.go:143] copyHostCerts
	I1228 07:05:34.736430  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:05:34.736488  181774 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:05:34.736496  181774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:05:34.736572  181774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:05:34.736655  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:05:34.736672  181774 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:05:34.736676  181774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:05:34.736702  181774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:05:34.736742  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:05:34.736761  181774 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:05:34.736765  181774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:05:34.736788  181774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:05:34.736832  181774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-782848 san=[127.0.0.1 192.168.76.2 force-systemd-env-782848 localhost minikube]
	I1228 07:05:35.244919  181774 provision.go:177] copyRemoteCerts
	I1228 07:05:35.244989  181774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:05:35.245051  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:35.264008  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33015 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa Username:docker}
	I1228 07:05:35.363425  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:05:35.363499  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:05:35.394863  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:05:35.394986  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1228 07:05:35.421647  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:05:35.421771  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 07:05:35.450307  181774 provision.go:87] duration metric: took 737.057386ms to configureAuth
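	As an aside (illustrative, not from the run): the server cert copied to /etc/docker/server.pem above was generated with SANs [127.0.0.1 192.168.76.2 force-systemd-env-782848 localhost minikube]; one way to confirm they landed:
	
	  docker exec force-systemd-env-782848 openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'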
	I1228 07:05:35.450384  181774 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:05:35.450611  181774 config.go:182] Loaded profile config "force-systemd-env-782848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:05:35.450640  181774 machine.go:97] duration metric: took 4.26574485s to provisionDockerMachine
	I1228 07:05:35.450660  181774 client.go:176] duration metric: took 12.439142945s to LocalClient.Create
	I1228 07:05:35.450708  181774 start.go:167] duration metric: took 12.439277437s to libmachine.API.Create "force-systemd-env-782848"
	I1228 07:05:35.450736  181774 start.go:293] postStartSetup for "force-systemd-env-782848" (driver="docker")
	I1228 07:05:35.450774  181774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:05:35.450863  181774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:05:35.450940  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:35.474350  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33015 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa Username:docker}
	I1228 07:05:35.577442  181774 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:05:35.581210  181774 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:05:35.581248  181774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:05:35.581259  181774 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:05:35.581317  181774 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:05:35.581398  181774 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:05:35.581412  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /etc/ssl/certs/41952.pem
	I1228 07:05:35.581512  181774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:05:35.589510  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:05:35.609349  181774 start.go:296] duration metric: took 158.583741ms for postStartSetup
	I1228 07:05:35.609733  181774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-782848
	I1228 07:05:35.632214  181774 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/config.json ...
	I1228 07:05:35.632554  181774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:05:35.632611  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:35.655499  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33015 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa Username:docker}
	I1228 07:05:35.756826  181774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:05:35.761647  181774 start.go:128] duration metric: took 12.755882518s to createHost
	I1228 07:05:35.761670  181774 start.go:83] releasing machines lock for "force-systemd-env-782848", held for 12.756058766s
	I1228 07:05:35.761751  181774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-782848
	I1228 07:05:35.779814  181774 ssh_runner.go:195] Run: cat /version.json
	I1228 07:05:35.779865  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:35.780122  181774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:05:35.780176  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-782848
	I1228 07:05:35.808376  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33015 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa Username:docker}
	I1228 07:05:35.818001  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33015 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-env-782848/id_rsa Username:docker}
	I1228 07:05:36.009949  181774 ssh_runner.go:195] Run: systemctl --version
	I1228 07:05:36.017122  181774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:05:36.022561  181774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:05:36.022687  181774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:05:36.055670  181774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:05:36.055695  181774 start.go:496] detecting cgroup driver to use...
	I1228 07:05:36.055713  181774 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:05:36.055768  181774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:05:36.074140  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:05:36.091059  181774 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:05:36.091207  181774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:05:36.113455  181774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:05:36.134004  181774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:05:36.256879  181774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:05:36.373234  181774 docker.go:234] disabling docker service ...
	I1228 07:05:36.373330  181774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:05:36.394864  181774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:05:36.408869  181774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:05:36.522216  181774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:05:36.687223  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:05:36.708880  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:05:36.730271  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:05:36.748202  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:05:36.758159  181774 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:05:36.758290  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:05:36.779480  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:36.790643  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:05:36.806000  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:05:36.818062  181774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:05:36.832857  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:05:36.850378  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:05:36.868163  181774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:05:36.879067  181774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:05:36.891895  181774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:05:36.900412  181774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:37.076872  181774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:05:37.289553  181774 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:05:37.289703  181774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:05:37.304778  181774 start.go:574] Will wait 60s for crictl version
	I1228 07:05:37.304907  181774 ssh_runner.go:195] Run: which crictl
	I1228 07:05:37.309288  181774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:05:37.375194  181774 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:05:37.375317  181774 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:37.419064  181774 ssh_runner.go:195] Run: containerd --version
	I1228 07:05:37.458169  181774 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
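	For context (illustrative, not from the run): the sed edits above are the containerd half of forcing the systemd cgroup driver, i.e. SystemdCgroup = true in /etc/containerd/config.toml, followed by a daemon-reload and restart. Two quick checks against the live container:
	
	  docker exec force-systemd-env-782848 grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = true
	  docker exec force-systemd-env-782848 systemctl is-active containerd                   # expect: active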
	I1228 07:05:37.461163  181774 cli_runner.go:164] Run: docker network inspect force-systemd-env-782848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:05:37.494109  181774 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:05:37.500383  181774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:05:37.521304  181774 kubeadm.go:884] updating cluster {Name:force-systemd-env-782848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-782848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:05:37.521444  181774 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:05:37.521516  181774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:37.559965  181774 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:37.559992  181774 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:05:37.560049  181774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:05:37.624489  181774 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:05:37.624515  181774 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:05:37.624523  181774 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1228 07:05:37.624611  181774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-782848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-782848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
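	For context (illustrative, not from the run): the [Service] override above is written as a systemd drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 07:05:37.705018 below), so the effective unit can be inspected with:
	
	  docker exec force-systemd-env-782848 systemctl cat kubelet --no-pager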
	I1228 07:05:37.624678  181774 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:05:37.678057  181774 cni.go:84] Creating CNI manager for ""
	I1228 07:05:37.678084  181774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:05:37.678099  181774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:05:37.678123  181774 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-782848 NodeName:force-systemd-env-782848 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:05:37.678245  181774 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-782848"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
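	For context (illustrative, not from the run): the rendered config above combines four kubeadm API documents, and the cgroupDriver: systemd line in the KubeletConfiguration is the setting this test exercises; it must agree with the SystemdCgroup = true written into containerd earlier. Once scp'd to /var/tmp/minikube/kubeadm.yaml it could be re-checked in place with the pinned binary:
	
	  docker exec force-systemd-env-782848 /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml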
	
	I1228 07:05:37.678314  181774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:05:37.691158  181774 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:05:37.691295  181774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:05:37.705018  181774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1228 07:05:37.730046  181774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:05:37.753513  181774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1228 07:05:37.779768  181774 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:05:37.783606  181774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:05:37.803133  181774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:05:38.015860  181774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:05:38.040532  181774 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848 for IP: 192.168.76.2
	I1228 07:05:38.040562  181774 certs.go:195] generating shared ca certs ...
	I1228 07:05:38.040579  181774 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:38.040752  181774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:05:38.040810  181774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:05:38.040823  181774 certs.go:257] generating profile certs ...
	I1228 07:05:38.040889  181774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.key
	I1228 07:05:38.040905  181774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.crt with IP's: []
	I1228 07:05:38.318296  181774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.crt ...
	I1228 07:05:38.318326  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.crt: {Name:mk340ce5ea5d812703f8744f74792a9931afe473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:38.318520  181774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.key ...
	I1228 07:05:38.318536  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/client.key: {Name:mkaf6caa754b71f9746d7d08f7b2e9ae314fc1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:38.318658  181774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key.369f3c31
	I1228 07:05:38.318687  181774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt.369f3c31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1228 07:05:38.642950  181774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt.369f3c31 ...
	I1228 07:05:38.642982  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt.369f3c31: {Name:mk34f45359dc5e87b495306720f89964bcb84c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:38.643155  181774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key.369f3c31 ...
	I1228 07:05:38.643171  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key.369f3c31: {Name:mkc4532183a71b8b9b92901c45759532d6ee9f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:38.643244  181774 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt.369f3c31 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt
	I1228 07:05:38.643326  181774 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key.369f3c31 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key
	I1228 07:05:38.643388  181774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.key
	I1228 07:05:38.643405  181774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.crt with IP's: []
	I1228 07:05:39.082495  181774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.crt ...
	I1228 07:05:39.082525  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.crt: {Name:mkc0f8a470c5bed4bd56822196050ce8bafee635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:39.082723  181774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.key ...
	I1228 07:05:39.082739  181774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.key: {Name:mk72a2bf7b1ad021bb5db2ba0bacd82b8d7cf81a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:39.082813  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:05:39.082842  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:05:39.082855  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:05:39.082873  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:05:39.082885  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:05:39.082900  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:05:39.082911  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:05:39.082927  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:05:39.082976  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:05:39.083018  181774 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:05:39.083032  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:05:39.083072  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:05:39.083101  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:05:39.083130  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:05:39.083178  181774 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:05:39.083210  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /usr/share/ca-certificates/41952.pem
	I1228 07:05:39.083231  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:39.083247  181774 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem -> /usr/share/ca-certificates/4195.pem
	I1228 07:05:39.083747  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:05:39.107135  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:05:39.126767  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:05:39.146552  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:05:39.165978  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:05:39.184656  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:05:39.203296  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:05:39.222656  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-env-782848/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:05:39.241462  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:05:39.260390  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:05:39.279593  181774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:05:39.299070  181774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:05:39.313383  181774 ssh_runner.go:195] Run: openssl version
	I1228 07:05:39.320100  181774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:05:39.328406  181774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:05:39.336559  181774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:05:39.340626  181774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:05:39.340709  181774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:05:39.392504  181774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:05:39.400687  181774 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:05:39.408861  181774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:39.417134  181774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:05:39.425291  181774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:39.429586  181774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:39.429684  181774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:05:39.472226  181774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:05:39.480288  181774 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:05:39.488012  181774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:05:39.495841  181774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:05:39.503985  181774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:05:39.508213  181774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:05:39.508296  181774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:05:39.550152  181774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:05:39.558398  181774 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
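	As an aside (illustrative, not from the run): the <hash>.0 names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, the naming scheme that lets -CApath lookups locate a CA. Two checks that should agree with the log:
	
	  docker exec force-systemd-env-782848 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  docker exec force-systemd-env-782848 openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expect: OK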
	I1228 07:05:39.566228  181774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:05:39.570761  181774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:05:39.570823  181774 kubeadm.go:401] StartCluster: {Name:force-systemd-env-782848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-782848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:39.570944  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:05:39.582225  181774 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:05:39Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:05:39.582314  181774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:05:39.592354  181774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:05:39.600876  181774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:05:39.600951  181774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:05:39.611618  181774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:05:39.611643  181774 kubeadm.go:158] found existing configuration files:
	
	I1228 07:05:39.611704  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:05:39.621030  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:05:39.621104  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:05:39.629284  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:05:39.638425  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:05:39.638501  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:05:39.646666  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:05:39.655700  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:05:39.655778  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:05:39.663728  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:05:39.672584  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:05:39.672661  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:05:39.680437  181774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:05:39.731263  181774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:05:39.731821  181774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:05:39.922641  181774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:05:39.922728  181774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:05:39.922775  181774 kubeadm.go:319] OS: Linux
	I1228 07:05:39.922829  181774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:05:39.922885  181774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:05:39.922941  181774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:05:39.922997  181774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:05:39.923062  181774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:05:39.923121  181774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:05:39.923174  181774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:05:39.923233  181774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:05:39.923289  181774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:05:40.021350  181774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:05:40.021505  181774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:05:40.021616  181774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:05:40.039678  181774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:05:40.046419  181774 out.go:252]   - Generating certificates and keys ...
	I1228 07:05:40.046525  181774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:05:40.046602  181774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:05:40.242083  181774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:05:40.360300  181774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:05:40.739270  181774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:05:41.196655  181774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:05:41.550169  181774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:05:41.550770  181774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-782848 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:05:41.766417  181774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:05:41.767024  181774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-782848 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:05:42.020077  181774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:05:42.130716  181774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:05:42.287721  181774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:05:42.290092  181774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:05:42.597323  181774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:05:42.772367  181774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:05:43.352469  181774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:05:43.601678  181774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:05:43.720636  181774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:05:43.720794  181774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:05:43.723356  181774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:05:43.726620  181774 out.go:252]   - Booting up control plane ...
	I1228 07:05:43.726795  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:05:43.726939  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:05:43.727032  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:05:43.750683  181774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:05:43.751098  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:05:43.768085  181774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:05:43.770566  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:05:43.770684  181774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:05:43.966587  181774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:05:43.966769  181774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:09:43.967198  181774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000653379s
	I1228 07:09:43.967234  181774 kubeadm.go:319] 
	I1228 07:09:43.967299  181774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:09:43.967340  181774 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:09:43.967449  181774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:09:43.967462  181774 kubeadm.go:319] 
	I1228 07:09:43.967567  181774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:09:43.967602  181774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:09:43.967633  181774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:09:43.967638  181774 kubeadm.go:319] 
	I1228 07:09:43.972750  181774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:09:43.973181  181774 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:09:43.973296  181774 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:09:43.973533  181774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:09:43.973543  181774 kubeadm.go:319] 
	I1228 07:09:43.973612  181774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1228 07:09:43.973724  181774 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-782848 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-782848 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000653379s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
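The failure above is kubeadm's wait-control-plane phase giving up after four minutes of polling the kubelet's local health endpoint. The checks it recommends can be run by hand against the node container, using the same binary and profile name as this run (a diagnostic sketch, not part of the test):

	# Service state and recent kubelet journal inside the node
	out/minikube-linux-arm64 -p force-systemd-env-782848 ssh "sudo systemctl status kubelet"
	out/minikube-linux-arm64 -p force-systemd-env-782848 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# The exact probe kubeadm polls; a healthy kubelet answers "ok"
	out/minikube-linux-arm64 -p force-systemd-env-782848 ssh "curl -sSL http://127.0.0.1:10248/healthz"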
	
	I1228 07:09:43.973806  181774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1228 07:09:44.390586  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:09:44.406430  181774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:09:44.406502  181774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:09:44.414508  181774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:09:44.414531  181774 kubeadm.go:158] found existing configuration files:
	
	I1228 07:09:44.414592  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:09:44.422506  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:09:44.422572  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:09:44.430102  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:09:44.437979  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:09:44.438057  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:09:44.445502  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:09:44.453241  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:09:44.453305  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:09:44.460447  181774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:09:44.468559  181774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:09:44.468681  181774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
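The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and here every file is missing, so each grep exits with status 2 and the rm is a no-op. The equivalent logic as a standalone shell sketch (an illustration of the behavior in this log, not minikube's actual source):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it targets the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done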
	I1228 07:09:44.476284  181774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:09:44.591840  181774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:09:44.592297  181774 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:09:44.656418  181774 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:13:46.131203  181774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:13:46.131234  181774 kubeadm.go:319] 
	I1228 07:13:46.131305  181774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:13:46.134944  181774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:13:46.135004  181774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:13:46.135170  181774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:13:46.135234  181774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:13:46.135278  181774 kubeadm.go:319] OS: Linux
	I1228 07:13:46.135325  181774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:13:46.135373  181774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:13:46.135427  181774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:13:46.135481  181774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:13:46.135529  181774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:13:46.135577  181774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:13:46.135622  181774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:13:46.135670  181774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:13:46.135716  181774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:13:46.135788  181774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:13:46.135883  181774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:13:46.135972  181774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:13:46.136035  181774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:13:46.138852  181774 out.go:252]   - Generating certificates and keys ...
	I1228 07:13:46.138959  181774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:13:46.139056  181774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:13:46.139166  181774 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:13:46.139258  181774 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:13:46.139346  181774 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:13:46.139405  181774 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:13:46.139472  181774 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:13:46.139536  181774 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:13:46.139614  181774 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:13:46.139689  181774 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:13:46.139731  181774 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:13:46.139790  181774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:13:46.139844  181774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:13:46.139904  181774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:13:46.139960  181774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:13:46.140027  181774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:13:46.140085  181774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:13:46.140174  181774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:13:46.140243  181774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:13:46.143487  181774 out.go:252]   - Booting up control plane ...
	I1228 07:13:46.143598  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:13:46.143682  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:13:46.143752  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:13:46.143858  181774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:13:46.143955  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:13:46.144062  181774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:13:46.144149  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:13:46.144192  181774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:13:46.144327  181774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:13:46.144434  181774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:13:46.144522  181774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000422032s
	I1228 07:13:46.144530  181774 kubeadm.go:319] 
	I1228 07:13:46.144588  181774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:13:46.144624  181774 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:13:46.144733  181774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:13:46.144742  181774 kubeadm.go:319] 
	I1228 07:13:46.144848  181774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:13:46.144882  181774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:13:46.144917  181774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:13:46.144980  181774 kubeadm.go:403] duration metric: took 8m6.574168929s to StartCluster
	I1228 07:13:46.145074  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:13:46.145174  181774 kubeadm.go:319] 
	E1228 07:13:46.156921  181774 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.156996  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.168417  181774 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.168586  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.179696  181774 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.179763  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.191266  181774 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.191335  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.202615  181774 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.202687  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.219597  181774 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.219668  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.235479  181774 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
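Every runc listing above fails because /run/containerd/runc/k8s.io does not exist, i.e. containerd never created a single Kubernetes container: the static control-plane pods were never launched, consistent with the kubelet health check timing out. The same conclusion can be reached with crictl, the fallback the log itself uses further down (assuming the node container is still running):

	# An empty listing here confirms no control-plane containers were ever started
	out/minikube-linux-arm64 -p force-systemd-env-782848 ssh "sudo crictl ps -a"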
	I1228 07:13:46.235511  181774 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:13:46.235524  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:13:46.308281  181774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:13:46.298689    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.299681    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.301427    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.302038    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.303680    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:13:46.298689    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.299681    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.301427    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.302038    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.303680    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:13:46.308310  181774 logs.go:123] Gathering logs for containerd ...
	I1228 07:13:46.308336  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1228 07:13:46.349083  181774 logs.go:123] Gathering logs for container status ...
	I1228 07:13:46.349124  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:13:46.381118  181774 logs.go:123] Gathering logs for kubelet ...
	I1228 07:13:46.381146  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:13:46.442596  181774 logs.go:123] Gathering logs for dmesg ...
	I1228 07:13:46.442634  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1228 07:13:46.455522  181774 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000422032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
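Of the three warnings, only the cgroups v1 one hints at a configuration knob: per its own text, on kubelet v1.35 or newer a cgroup v1 host must be explicitly allowed by setting the KubeletConfiguration option FailCgroupV1 to false. A sketch of applying that to the config file this log writes, assuming the YAML key uses the usual lowerCamelCase form (failCgroupV1); an untested workaround idea, not the established fix for this failure:

	# Hypothetical: opt in to cgroup v1 in the kubelet config minikube generated
	out/minikube-linux-arm64 -p force-systemd-env-782848 ssh \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"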
	W1228 07:13:46.455575  181774 out.go:285] * 
	W1228 07:13:46.455625  181774 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000422032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:13:46.455641  181774 out.go:285] * 
	W1228 07:13:46.455887  181774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:13:46.460868  181774 out.go:203] 
	W1228 07:13:46.464692  181774 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000422032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:13:46.464753  181774 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:13:46.464779  181774 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:13:46.467843  181774 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-782848 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
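The suggestion recorded in the log above amounts to re-running the same start with an explicit kubelet cgroup driver. Spelled out with this run's flags (whether it clears this particular hang is not established by this report):

	out/minikube-linux-arm64 start -p force-systemd-env-782848 --memory=3072 \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd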
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-782848 ssh "cat /etc/containerd/config.toml"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-28 07:13:46.927323104 +0000 UTC m=+2749.579152492
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-782848
helpers_test.go:244: (dbg) docker inspect force-systemd-env-782848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308",
	        "Created": "2025-12-28T07:05:30.301249216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:05:30.383600006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308/hosts",
	        "LogPath": "/var/lib/docker/containers/2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308/2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308-json.log",
	        "Name": "/force-systemd-env-782848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-782848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-782848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b5b17a993235b43c8307fd673719a30b1c0b6c784d1ad071061bc28ccb28308",
	                "LowerDir": "/var/lib/docker/overlay2/83bfa6f55d57528ac043c8675d30df00d39b31ab96c2c71b0bd8816872aaee87-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83bfa6f55d57528ac043c8675d30df00d39b31ab96c2c71b0bd8816872aaee87/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83bfa6f55d57528ac043c8675d30df00d39b31ab96c2c71b0bd8816872aaee87/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83bfa6f55d57528ac043c8675d30df00d39b31ab96c2c71b0bd8816872aaee87/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-782848",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-782848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-782848",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-782848",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-782848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f492e5194ff956a163e2f4126afd6fce62ccf82eb0f6cebf4c6538a9b4871c9b",
	            "SandboxKey": "/var/run/docker/netns/f492e5194ff9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-782848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:05:f0:5f:ba:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "60444ab3ee70fe8ab8e8a91c1ddc8802e316c18b671f75dfdc3ecfc5599af810",
	                    "EndpointID": "e79b9428b70af131eec88893fe8c10c3f0899b72f68b4013921e2c28029fa9ec",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-782848",
	                        "2b5b17a99323"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
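The inspect dump above is the raw post-mortem evidence: the container itself is healthy (State.Status is "running") and SSH is published on 127.0.0.1:33015, so the failure sits inside the node rather than at the Docker layer. As a minimal sketch of how such a dump can be consumed programmatically (a hypothetical helper, not part of helpers_test.go; the profile name is taken from this run):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspect mirrors only the fields of `docker inspect` output read here.
type inspect struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Profile name taken from the failed run above.
	out, err := exec.Command("docker", "inspect", "force-systemd-env-782848").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	c := containers[0]
	fmt.Println("status:", c.State.Status) // "running" in the dump above
	for _, b := range c.NetworkSettings.Ports["22/tcp"] {
		// 127.0.0.1:33015 in the dump above
		fmt.Printf("ssh published on %s:%s\n", b.HostIp, b.HostPort)
	}
}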
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-782848 -n force-systemd-env-782848
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-782848 -n force-systemd-env-782848: exit status 6 (346.856192ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1228 07:13:47.296054  205831 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-782848" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
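Exit status 6 here is not opaque: `minikube status` is understood to encode per-component health as bit flags in its exit code, so 6 alongside a Host state of "Running" suggests the host container is fine while the cluster-level checks failed, consistent with the kubeconfig endpoint error in stderr. A hedged sketch of decoding it (the bit labels are an assumption, not lifted from minikube source; binary path and profile name come from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Binary path and profile name taken from the run above.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "force-systemd-env-782848")
	err := cmd.Run()
	if cmd.ProcessState == nil {
		log.Fatal(err) // the process never started (e.g. binary missing)
	}
	code := cmd.ProcessState.ExitCode() // 6 in the run above
	// Bit labels below are an assumption about the encoding.
	for _, c := range []struct {
		bit  int
		name string
	}{{1, "host"}, {2, "cluster"}, {4, "kubernetes"}} {
		if code&c.bit != 0 {
			fmt.Println("not OK:", c.name) // code 6 => cluster, kubernetes
		}
	}
}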
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-782848 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-742569 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat docker --no-pager                                                                             │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo docker system info                                                                                          │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cri-dockerd --version                                                                                       │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat containerd --no-pager                                                                         │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo cat /etc/containerd/config.toml                                                                             │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo containerd config dump                                                                                      │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat crio --no-pager                                                                               │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo crio config                                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ delete  │ -p cilium-742569                                                                                                                  │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ delete  │ -p cert-expiration-478620                                                                                                         │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ start   │ -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │                     │
	│ ssh     │ force-systemd-env-782848 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:11:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:11:42.715378  202182 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:11:42.715558  202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:11:42.715590  202182 out.go:374] Setting ErrFile to fd 2...
	I1228 07:11:42.715612  202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:11:42.715999  202182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:11:42.716697  202182 out.go:368] Setting JSON to false
	I1228 07:11:42.718260  202182 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3253,"bootTime":1766902650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:11:42.718337  202182 start.go:143] virtualization:  
	I1228 07:11:42.722422  202182 out.go:179] * [force-systemd-flag-257442] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:11:42.725859  202182 notify.go:221] Checking for updates...
	I1228 07:11:42.726417  202182 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:11:42.729863  202182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:11:42.733034  202182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:11:42.736198  202182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:11:42.739620  202182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:11:42.742650  202182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:11:42.746164  202182 config.go:182] Loaded profile config "force-systemd-env-782848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:11:42.746308  202182 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:11:42.770870  202182 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:11:42.770972  202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:11:42.844310  202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.83443823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:11:42.844418  202182 docker.go:319] overlay module found
	I1228 07:11:42.849492  202182 out.go:179] * Using the docker driver based on user configuration
	I1228 07:11:42.852348  202182 start.go:309] selected driver: docker
	I1228 07:11:42.852368  202182 start.go:928] validating driver "docker" against <nil>
	I1228 07:11:42.852382  202182 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:11:42.853288  202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:11:42.918090  202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.898066629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:11:42.918240  202182 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:11:42.918462  202182 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:11:42.921452  202182 out.go:179] * Using Docker driver with root privileges
	I1228 07:11:42.924398  202182 cni.go:84] Creating CNI manager for ""
	I1228 07:11:42.924520  202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:11:42.924534  202182 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:11:42.924614  202182 start.go:353] cluster config:
	{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:11:42.927742  202182 out.go:179] * Starting "force-systemd-flag-257442" primary control-plane node in "force-systemd-flag-257442" cluster
	I1228 07:11:42.930570  202182 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:11:42.933508  202182 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:11:42.936360  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:42.936405  202182 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:11:42.936416  202182 cache.go:65] Caching tarball of preloaded images
	I1228 07:11:42.936441  202182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:11:42.936533  202182 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:11:42.936546  202182 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:11:42.936653  202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
	I1228 07:11:42.936673  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json: {Name:mk1bb575eaedf054a5c39231661ba5e51bfbfb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:42.955984  202182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:11:42.956009  202182 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:11:42.956029  202182 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:11:42.956060  202182 start.go:360] acquireMachinesLock for force-systemd-flag-257442: {Name:mk182766e2370865019edd04ffc6f7524c78e636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:11:42.956174  202182 start.go:364] duration metric: took 92.899µs to acquireMachinesLock for "force-systemd-flag-257442"
	I1228 07:11:42.956203  202182 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:11:42.956270  202182 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:11:42.959751  202182 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:11:42.959984  202182 start.go:159] libmachine.API.Create for "force-systemd-flag-257442" (driver="docker")
	I1228 07:11:42.960019  202182 client.go:173] LocalClient.Create starting
	I1228 07:11:42.960087  202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
	I1228 07:11:42.960128  202182 main.go:144] libmachine: Decoding PEM data...
	I1228 07:11:42.960147  202182 main.go:144] libmachine: Parsing certificate...
	I1228 07:11:42.960199  202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
	I1228 07:11:42.960227  202182 main.go:144] libmachine: Decoding PEM data...
	I1228 07:11:42.960242  202182 main.go:144] libmachine: Parsing certificate...
	I1228 07:11:42.960646  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:11:42.976005  202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:11:42.976085  202182 network_create.go:284] running [docker network inspect force-systemd-flag-257442] to gather additional debugging logs...
	I1228 07:11:42.976106  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442
	W1228 07:11:42.991634  202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 returned with exit code 1
	I1228 07:11:42.991665  202182 network_create.go:287] error running [docker network inspect force-systemd-flag-257442]: docker network inspect force-systemd-flag-257442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-257442 not found
	I1228 07:11:42.991678  202182 network_create.go:289] output of [docker network inspect force-systemd-flag-257442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-257442 not found
	
	** /stderr **
	I1228 07:11:42.991788  202182 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:11:43.009147  202182 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
	I1228 07:11:43.009450  202182 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
	I1228 07:11:43.009714  202182 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
	I1228 07:11:43.010021  202182 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-60444ab3ee70 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:84:9e:6e:bc:3d} reservation:<nil>}
	I1228 07:11:43.010405  202182 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d72f0}
	I1228 07:11:43.010426  202182 network_create.go:124] attempt to create docker network force-systemd-flag-257442 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 07:11:43.010488  202182 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-257442 force-systemd-flag-257442
	I1228 07:11:43.066640  202182 network_create.go:108] docker network force-systemd-flag-257442 192.168.85.0/24 created
	I1228 07:11:43.066670  202182 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-257442" container
	I1228 07:11:43.066751  202182 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:11:43.086978  202182 cli_runner.go:164] Run: docker volume create force-systemd-flag-257442 --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:11:43.106995  202182 oci.go:103] Successfully created a docker volume force-systemd-flag-257442
	I1228 07:11:43.107086  202182 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-257442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --entrypoint /usr/bin/test -v force-systemd-flag-257442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:11:43.672034  202182 oci.go:107] Successfully prepared a docker volume force-systemd-flag-257442
	I1228 07:11:43.672096  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:43.672107  202182 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:11:43.672194  202182 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:11:47.619647  202182 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.947413254s)
	I1228 07:11:47.619678  202182 kic.go:203] duration metric: took 3.947567208s to extract preloaded images to volume ...
	W1228 07:11:47.619829  202182 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:11:47.619942  202182 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:11:47.682992  202182 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-257442 --name force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-257442 --network force-systemd-flag-257442 --ip 192.168.85.2 --volume force-systemd-flag-257442:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:11:47.987523  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Running}}
	I1228 07:11:48.014448  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.042972  202182 cli_runner.go:164] Run: docker exec force-systemd-flag-257442 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:11:48.101378  202182 oci.go:144] the created container "force-systemd-flag-257442" has a running status.
	I1228 07:11:48.101414  202182 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa...
	I1228 07:11:48.675904  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:11:48.675956  202182 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:11:48.704271  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.736793  202182 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:11:48.736819  202182 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-257442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:11:48.804337  202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
	I1228 07:11:48.834826  202182 machine.go:94] provisionDockerMachine start ...
	I1228 07:11:48.834944  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:48.863393  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:48.863873  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:48.863893  202182 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:11:49.032380  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
	
	I1228 07:11:49.032406  202182 ubuntu.go:182] provisioning hostname "force-systemd-flag-257442"
	I1228 07:11:49.032540  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.052336  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:49.052665  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:49.052682  202182 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-257442 && echo "force-systemd-flag-257442" | sudo tee /etc/hostname
	I1228 07:11:49.213253  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
	
	I1228 07:11:49.213336  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.236648  202182 main.go:144] libmachine: Using SSH client type: native
	I1228 07:11:49.236959  202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1228 07:11:49.236977  202182 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-257442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-257442/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-257442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:11:49.397038  202182 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:11:49.397065  202182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:11:49.397085  202182 ubuntu.go:190] setting up certificates
	I1228 07:11:49.397094  202182 provision.go:84] configureAuth start
	I1228 07:11:49.397159  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:49.420305  202182 provision.go:143] copyHostCerts
	I1228 07:11:49.420345  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:11:49.420374  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:11:49.420386  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:11:49.420564  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:11:49.420662  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:11:49.420680  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:11:49.420685  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:11:49.420715  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:11:49.420761  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:11:49.420776  202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:11:49.420780  202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:11:49.420805  202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:11:49.420852  202182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-257442 san=[127.0.0.1 192.168.85.2 force-systemd-flag-257442 localhost minikube]
	I1228 07:11:49.646258  202182 provision.go:177] copyRemoteCerts
	I1228 07:11:49.646332  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:11:49.646373  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.667681  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:49.768622  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:11:49.768692  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:11:49.786043  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:11:49.786115  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:11:49.805713  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:11:49.805777  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:11:49.824117  202182 provision.go:87] duration metric: took 427.001952ms to configureAuth
	I1228 07:11:49.824142  202182 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:11:49.824330  202182 config.go:182] Loaded profile config "force-systemd-flag-257442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:11:49.824345  202182 machine.go:97] duration metric: took 989.496866ms to provisionDockerMachine
	I1228 07:11:49.824352  202182 client.go:176] duration metric: took 6.864322529s to LocalClient.Create
	I1228 07:11:49.824369  202182 start.go:167] duration metric: took 6.864385431s to libmachine.API.Create "force-systemd-flag-257442"
	I1228 07:11:49.824377  202182 start.go:293] postStartSetup for "force-systemd-flag-257442" (driver="docker")
	I1228 07:11:49.824385  202182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:11:49.824441  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:11:49.824572  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:49.841697  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:49.940326  202182 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:11:49.943423  202182 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:11:49.943449  202182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:11:49.943460  202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:11:49.943515  202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:11:49.943595  202182 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:11:49.943601  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /etc/ssl/certs/41952.pem
	I1228 07:11:49.943695  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:11:49.950748  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:11:49.976888  202182 start.go:296] duration metric: took 152.497114ms for postStartSetup
	I1228 07:11:49.977259  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:49.998212  202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
	I1228 07:11:49.998522  202182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:11:49.998567  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.030466  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.125879  202182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:11:50.131149  202182 start.go:128] duration metric: took 7.174863789s to createHost
	I1228 07:11:50.131177  202182 start.go:83] releasing machines lock for "force-systemd-flag-257442", held for 7.174990436s
	I1228 07:11:50.131248  202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
	I1228 07:11:50.148157  202182 ssh_runner.go:195] Run: cat /version.json
	I1228 07:11:50.148166  202182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:11:50.148207  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.148236  202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
	I1228 07:11:50.172404  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.174217  202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
	I1228 07:11:50.357542  202182 ssh_runner.go:195] Run: systemctl --version
	I1228 07:11:50.363928  202182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:11:50.368163  202182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:11:50.368231  202182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:11:50.395201  202182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:11:50.395227  202182 start.go:496] detecting cgroup driver to use...
	I1228 07:11:50.395241  202182 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:11:50.395299  202182 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:11:50.410474  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:11:50.423445  202182 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:11:50.423535  202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:11:50.440554  202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:11:50.458778  202182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:11:50.577463  202182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:11:50.701377  202182 docker.go:234] disabling docker service ...
	I1228 07:11:50.701466  202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:11:50.726518  202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:11:50.741501  202182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:11:50.867242  202182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:11:50.974607  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:11:50.987492  202182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:11:51.008605  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:11:51.019015  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:11:51.028781  202182 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:11:51.028861  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:11:51.038465  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:11:51.047159  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:11:51.055758  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:11:51.064984  202182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:11:51.072909  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:11:51.081912  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:11:51.090824  202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:11:51.099899  202182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:11:51.107450  202182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:11:51.115067  202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:11:51.235548  202182 ssh_runner.go:195] Run: sudo systemctl restart containerd
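
The sed edits above are the heart of --force-systemd: they rewrite /etc/containerd/config.toml so containerd drives cgroups through systemd, and the daemon-reload plus restart pick the change up. The key rewrite, the SystemdCgroup line, has the same shape as this self-contained Go sketch of the regex substitution (illustrative only; the real edit is the sed -r command in the log):

package main

import (
	"fmt"
	"regexp"
)

// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
func forceSystemdCgroup(toml string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(toml, "${1}SystemdCgroup = true")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	fmt.Print(forceSystemdCgroup(in))
}
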
	I1228 07:11:51.376438  202182 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:11:51.376630  202182 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:11:51.380725  202182 start.go:574] Will wait 60s for crictl version
	I1228 07:11:51.380800  202182 ssh_runner.go:195] Run: which crictl
	I1228 07:11:51.384409  202182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:11:51.409180  202182 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:11:51.409291  202182 ssh_runner.go:195] Run: containerd --version
	I1228 07:11:51.430646  202182 ssh_runner.go:195] Run: containerd --version
	I1228 07:11:51.460595  202182 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:11:51.463697  202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:11:51.480057  202182 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:11:51.484647  202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
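
The one-liner above is minikube's idempotent /etc/hosts upsert: strip any line already ending in the tab-separated name, append the fresh mapping, and copy the result back into place. The identical pattern runs again further down for control-plane.minikube.internal. A rough Go equivalent of the upsert, operating on file contents only (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line ending in "\t<name>" and appends a
// fresh mapping, mirroring the `{ grep -v ...; echo ...; }` shell idiom.
func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	joined := strings.TrimRight(strings.Join(keep, "\n"), "\n")
	return joined + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	b, _ := os.ReadFile("/etc/hosts")
	fmt.Print(upsertHost(string(b), "192.168.85.1", "host.minikube.internal"))
}
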
	I1228 07:11:51.495570  202182 kubeadm.go:884] updating cluster {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:11:51.495689  202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:11:51.495768  202182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:11:51.534723  202182 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:11:51.534803  202182 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:11:51.534903  202182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:11:51.559789  202182 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:11:51.559808  202182 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:11:51.559817  202182 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1228 07:11:51.559914  202182 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-257442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:11:51.559976  202182 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:11:51.585689  202182 cni.go:84] Creating CNI manager for ""
	I1228 07:11:51.585767  202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:11:51.585801  202182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:11:51.585861  202182 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-257442 NodeName:force-systemd-flag-257442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:11:51.586026  202182 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-257442"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:11:51.586150  202182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:11:51.594509  202182 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:11:51.594591  202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:11:51.602306  202182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1228 07:11:51.614702  202182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:11:51.627096  202182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
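
The kubeadm.yaml.new just copied is the four-document YAML stream rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A toy Go splitter that lists the kind: of each document is enough to sanity-check a rendered config; the embedded YAML below is abbreviated to just the headers from the log:

package main

import (
	"fmt"
	"strings"
)

const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

// Split the multi-document stream on "---" separators and report each kind.
func main() {
	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
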
	I1228 07:11:51.639529  202182 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:11:51.643115  202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:11:51.652078  202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:11:51.778040  202182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:11:51.796591  202182 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442 for IP: 192.168.85.2
	I1228 07:11:51.796682  202182 certs.go:195] generating shared ca certs ...
	I1228 07:11:51.796718  202182 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:51.796936  202182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:11:51.797027  202182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:11:51.797064  202182 certs.go:257] generating profile certs ...
	I1228 07:11:51.797180  202182 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key
	I1228 07:11:51.797224  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt with IP's: []
	I1228 07:11:52.013074  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt ...
	I1228 07:11:52.013118  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt: {Name:mk7aed4b1361cad35efdb364bf3318878e0ba011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.013324  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key ...
	I1228 07:11:52.013339  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key: {Name:mk8ec5637167dd5ffdf85444ad06fe325864a279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.013439  202182 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be
	I1228 07:11:52.013462  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 07:11:52.367478  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be ...
	I1228 07:11:52.367511  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be: {Name:mkda9f7af1a3a08068bbee1ddd2a4b4ef4a9f820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.367692  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be ...
	I1228 07:11:52.367707  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be: {Name:mk045bbb68239d684b49be802faad160202aaf3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.367798  202182 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt
	I1228 07:11:52.367875  202182 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key
	I1228 07:11:52.367939  202182 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key
	I1228 07:11:52.367956  202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt with IP's: []
	I1228 07:11:52.450774  202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt ...
	I1228 07:11:52.450804  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt: {Name:mkad6c1484d2eff4419d1163b5dc950a7aeb71a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:11:52.450986  202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key ...
	I1228 07:11:52.450999  202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key: {Name:mk7ffb474cec5cc67e49a8a4a4b043205762d02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
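
crypto.go's work in the lines above is ordinary x509 plumbing: generate a key, build a certificate template carrying the IP SANs minikube needs, sign, and write PEM under a file lock. A compressed self-signed sketch using the IP set from the apiserver cert above; this is illustrative standard-library code, not minikube's actual crypto.go, which additionally handles CA signing and key reuse:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// IP SANs as in the log: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
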
	I1228 07:11:52.451100  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:11:52.451122  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:11:52.451135  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:11:52.451157  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:11:52.451173  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:11:52.451198  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:11:52.451213  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:11:52.451224  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:11:52.451276  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:11:52.451317  202182 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:11:52.451330  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:11:52.451359  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:11:52.451383  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:11:52.451418  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:11:52.451466  202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:11:52.451500  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.451519  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.451533  202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem -> /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.452048  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:11:52.470544  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:11:52.489878  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:11:52.510247  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:11:52.528132  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:11:52.545968  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:11:52.563355  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:11:52.580238  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:11:52.598910  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:11:52.617614  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:11:52.636247  202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:11:52.654304  202182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:11:52.667049  202182 ssh_runner.go:195] Run: openssl version
	I1228 07:11:52.673735  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.681295  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:11:52.688626  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.692403  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.692584  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:11:52.735038  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:11:52.742898  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
	I1228 07:11:52.750466  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.758067  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:11:52.765682  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.769877  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.769968  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:11:52.810873  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:11:52.818298  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:11:52.825860  202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.833320  202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:11:52.840574  202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.844181  202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.844245  202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:11:52.885195  202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:11:52.893615  202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
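
The openssl x509 -hash / ln -fs pairs above install the certificates the way OpenSSL expects to find them: /etc/ssl/certs is scanned by <subject-hash>.0 symlinks (here 41952.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0, 4195.pem -> 51391683.0). The same step sketched in Go by shelling out to openssl, much as ssh_runner does over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the <subject-hash>.0 symlink OpenSSL uses to locate
// a CA certificate, equivalent to `openssl x509 -hash` plus `ln -fs`.
func linkByHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // mirror the -f in `ln -fs`; ignore "not found"
	return os.Symlink(pemPath, link)
}

func main() {
	err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
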
	I1228 07:11:52.900889  202182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:11:52.904539  202182 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:11:52.904638  202182 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:11:52.904749  202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:11:52.915402  202182 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:11:52Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:11:52.915477  202182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:11:52.923486  202182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:11:52.931211  202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:11:52.931307  202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:11:52.939006  202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:11:52.939027  202182 kubeadm.go:158] found existing configuration files:
	
	I1228 07:11:52.939087  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:11:52.946627  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:11:52.946691  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:11:52.954506  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:11:52.963900  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:11:52.963966  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:11:52.971542  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:11:52.979414  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:11:52.979485  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:11:52.986647  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:11:52.994899  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:11:52.995009  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:11:53.003577  202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:11:53.051927  202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:11:53.056727  202182 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:11:53.128709  202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:11:53.128782  202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:11:53.128818  202182 kubeadm.go:319] OS: Linux
	I1228 07:11:53.128866  202182 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:11:53.128914  202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:11:53.128962  202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:11:53.129012  202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:11:53.129062  202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:11:53.129111  202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:11:53.129156  202182 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:11:53.129205  202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:11:53.129251  202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:11:53.196911  202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:11:53.197098  202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:11:53.197193  202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:11:53.206716  202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:11:53.210202  202182 out.go:252]   - Generating certificates and keys ...
	I1228 07:11:53.210291  202182 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:11:53.210361  202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:11:53.342406  202182 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:11:53.807332  202182 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:11:54.152653  202182 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:11:54.360536  202182 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:11:54.510375  202182 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:11:54.510779  202182 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:11:54.630196  202182 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:11:54.630431  202182 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:11:55.093747  202182 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:11:55.202960  202182 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:11:55.357297  202182 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:11:55.357650  202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:11:55.557158  202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:11:55.707761  202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:11:55.947840  202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:11:56.066861  202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:11:56.190344  202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:11:56.190993  202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:11:56.193691  202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:11:56.197563  202182 out.go:252]   - Booting up control plane ...
	I1228 07:11:56.197679  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:11:56.197771  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:11:56.197847  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:11:56.216231  202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:11:56.216354  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:11:56.223498  202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:11:56.224057  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:11:56.224309  202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:11:56.359584  202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:11:56.359704  202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:13:46.131203  181774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:13:46.131234  181774 kubeadm.go:319] 
	I1228 07:13:46.131305  181774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:13:46.134944  181774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:13:46.135004  181774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:13:46.135170  181774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:13:46.135234  181774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:13:46.135278  181774 kubeadm.go:319] OS: Linux
	I1228 07:13:46.135325  181774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:13:46.135373  181774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:13:46.135427  181774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:13:46.135481  181774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:13:46.135529  181774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:13:46.135577  181774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:13:46.135622  181774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:13:46.135670  181774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:13:46.135716  181774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:13:46.135788  181774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:13:46.135883  181774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:13:46.135972  181774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:13:46.136035  181774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:13:46.138852  181774 out.go:252]   - Generating certificates and keys ...
	I1228 07:13:46.138959  181774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:13:46.139056  181774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:13:46.139166  181774 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:13:46.139258  181774 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:13:46.139346  181774 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:13:46.139405  181774 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:13:46.139472  181774 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:13:46.139536  181774 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:13:46.139614  181774 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:13:46.139689  181774 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:13:46.139731  181774 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:13:46.139790  181774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:13:46.139844  181774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:13:46.139904  181774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:13:46.139960  181774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:13:46.140027  181774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:13:46.140085  181774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:13:46.140174  181774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:13:46.140243  181774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:13:46.143487  181774 out.go:252]   - Booting up control plane ...
	I1228 07:13:46.143598  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:13:46.143682  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:13:46.143752  181774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:13:46.143858  181774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:13:46.143955  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:13:46.144062  181774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:13:46.144149  181774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:13:46.144192  181774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:13:46.144327  181774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:13:46.144434  181774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:13:46.144522  181774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000422032s
	I1228 07:13:46.144530  181774 kubeadm.go:319] 
	I1228 07:13:46.144588  181774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:13:46.144624  181774 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:13:46.144733  181774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:13:46.144742  181774 kubeadm.go:319] 
	I1228 07:13:46.144848  181774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:13:46.144882  181774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:13:46.144917  181774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
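
This is the actual failure: kubeadm's kubelet-check phase polls http://127.0.0.1:10248/healthz until its 4m0s budget runs out, and here the kubelet never answers. The wait has roughly this shape (a sketch of the poll loop with the deadline from the log, not kubeadm's own code):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint once a second
// until it returns 200 OK or the context deadline expires.
func waitKubeletHealthy(ctx context.Context) error {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-tick.C:
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err != nil {
				continue // kubelet not listening yet
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitKubeletHealthy(ctx))
}
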
	I1228 07:13:46.144980  181774 kubeadm.go:403] duration metric: took 8m6.574168929s to StartCluster
	I1228 07:13:46.145074  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:13:46.145174  181774 kubeadm.go:319] 
	E1228 07:13:46.156921  181774 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.156996  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.168417  181774 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.168586  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.179696  181774 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.179763  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.191266  181774 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.191335  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.202615  181774 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.202687  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.219597  181774 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.219668  181774 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	E1228 07:13:46.235479  181774 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:13:46.235511  181774 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:13:46.235524  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:13:46.308281  181774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:13:46.298689    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.299681    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.301427    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.302038    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.303680    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:13:46.298689    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.299681    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.301427    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.302038    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:46.303680    4831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:13:46.308310  181774 logs.go:123] Gathering logs for containerd ...
	I1228 07:13:46.308336  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1228 07:13:46.349083  181774 logs.go:123] Gathering logs for container status ...
	I1228 07:13:46.349124  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 07:13:46.381118  181774 logs.go:123] Gathering logs for kubelet ...
	I1228 07:13:46.381146  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:13:46.442596  181774 logs.go:123] Gathering logs for dmesg ...
	I1228 07:13:46.442634  181774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
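
After the init failure, logs.go sweeps the node for diagnostics: journalctl for containerd and kubelet, container status via crictl, and a filtered dmesg. A compact Go sketch of that sweep, with the command strings lifted verbatim from the Run: lines above (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// Diagnostic sweep mirroring the "Gathering logs for ..." steps above.
func main() {
	cmds := [][2]string{
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c[1]).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", c[0], err, out)
	}
}
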
	W1228 07:13:46.455522  181774 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000422032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:13:46.455575  181774 out.go:285] * 
	W1228 07:13:46.455625  181774 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr identical to the output above; verbatim duplicate omitted]
	
	W1228 07:13:46.455641  181774 out.go:285] * 
	W1228 07:13:46.455887  181774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:13:46.460868  181774 out.go:203] 
	W1228 07:13:46.464692  181774 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr identical to the output above; verbatim duplicate omitted]
	
	W1228 07:13:46.464753  181774 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:13:46.464779  181774 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:13:46.467843  181774 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.210661382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.210745486Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.210855378Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.210938530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211009242Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211091253Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211152686Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211217925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211291214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211384893Z" level=info msg="Connect containerd service"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.211787055Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.212566443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.224648107Z" level=info msg="Start subscribing containerd event"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.224847485Z" level=info msg="Start recovering state"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.232846539Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.233030517Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.283914012Z" level=info msg="Start event monitor"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284103487Z" level=info msg="Start cni network conf syncer for default"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284213166Z" level=info msg="Start streaming server"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284299345Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284579964Z" level=info msg="runtime interface starting up..."
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284657782Z" level=info msg="starting plugins..."
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.284730833Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 07:05:37 force-systemd-env-782848 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 28 07:05:37 force-systemd-env-782848 containerd[761]: time="2025-12-28T07:05:37.290932019Z" level=info msg="containerd successfully booted in 0.141222s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:13:47.832397    4955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:47.833144    4955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:47.834722    4955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:47.835324    4955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:13:47.837044    4955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:13:47 up 56 min,  0 user,  load average: 0.70, 1.12, 1.63
	Linux force-systemd-env-782848 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:13:44 force-systemd-env-782848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:13:45 force-systemd-env-782848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 28 07:13:45 force-systemd-env-782848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:45 force-systemd-env-782848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:45 force-systemd-env-782848 kubelet[4773]: E1228 07:13:45.502267    4773 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:13:45 force-systemd-env-782848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:13:45 force-systemd-env-782848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:46 force-systemd-env-782848 kubelet[4811]: E1228 07:13:46.259583    4811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:46 force-systemd-env-782848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:47 force-systemd-env-782848 kubelet[4863]: E1228 07:13:47.025280    4863 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:13:47 force-systemd-env-782848 kubelet[4943]: E1228 07:13:47.763894    4943 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:13:47 force-systemd-env-782848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:13:47.613888  205902 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.625047  205902 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.636194  205902 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.647122  205902 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.658225  205902 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.670028  205902 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	E1228 07:13:47.681446  205902 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:13:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"

                                                
                                                
** /stderr **
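Taken together, the post-mortem above tells a single story: the kubelet refuses to start because this host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1" in the journal, restart counter past 320), so no static pods are ever created, runc has no /run/containerd/runc/k8s.io state to list, and kubectl gets connection refused on 8443. The preflight warning's own remedy is the kubelet configuration option 'FailCgroupV1' set to 'false'; minikube's suggestion is --extra-config=kubelet.cgroup-driver=systemd. A minimal diagnostic-and-retry sketch, assuming the force-systemd-env-782848 node container is still running; the retry flags below are quoted from minikube's own output, and whether they clear the v1 validation on kubelet v1.35 is not verified here:

	# 1. Confirm the host cgroup hierarchy: 'cgroup2fs' means v2, 'tmpfs' means the v1 layout.
	stat -fc %T /sys/fs/cgroup/

	# 2. Confirm containerd never created any k8s.io containers inside the node.
	docker exec force-systemd-env-782848 ls /run/containerd/runc/ 2>/dev/null \
		|| echo "no runc state: nothing was ever started"

	# 3. Retry with the cgroup-driver suggestion quoted in the minikube output above.
	out/minikube-linux-arm64 delete -p force-systemd-env-782848
	out/minikube-linux-arm64 start -p force-systemd-env-782848 \
		--memory=3072 --driver=docker --container-runtime=containerd \
		--extra-config=kubelet.cgroup-driver=systemd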
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-782848 -n force-systemd-env-782848
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-782848 -n force-systemd-env-782848: exit status 6 (340.545972ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:13:48.278084  206040 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-782848" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-782848" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-782848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-782848
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-782848: (1.935393955s)
--- FAIL: TestForceSystemdEnv (507.58s)

                                                
                                    
TestPause/serial/VerifyStatus (1.95s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-133308 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-133308 --output=json --layout=cluster: exit status 2 (342.43756ms)

                                                
                                                
-- stdout --
	{"Name":"pause-133308","StatusCode":200,"StatusName":"OK","Step":"Done","StepDetail":"* Paused 0 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-133308","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":200,"StatusName":"OK"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:200: incorrect status code: 200
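pause_test.go is parsing the JSON above: the cluster-level StatusCode came back 200 (OK) even though the kubelet component already reports 405 (Stopped), and for a just-paused cluster the test rejects a top-level 200. A hedged sketch of pulling both fields out of the same payload, assuming jq is available on the host (the field paths are exactly as printed above):

	out/minikube-linux-arm64 status -p pause-133308 --output=json --layout=cluster \
		| jq '{cluster: .StatusCode, kubelet: .Nodes[0].Components.kubelet.StatusCode}'
	# with the payload above this prints {"cluster": 200, "kubelet": 405}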
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/VerifyStatus]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-133308
helpers_test.go:244: (dbg) docker inspect pause-133308:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69",
	        "Created": "2025-12-28T06:57:54.896210769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 146577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:57:56.018656136Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69/hostname",
	        "HostsPath": "/var/lib/docker/containers/199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69/hosts",
	        "LogPath": "/var/lib/docker/containers/199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69/199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69-json.log",
	        "Name": "/pause-133308",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-133308:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-133308",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "199431f2451f615865128c5478d5c737428ac746db5b6f82c19af75c4ac56a69",
	                "LowerDir": "/var/lib/docker/overlay2/224a3971a369a2d316131109c7fa9ad3958d1d38a8ce821d6acea1085f0a8e01-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/224a3971a369a2d316131109c7fa9ad3958d1d38a8ce821d6acea1085f0a8e01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/224a3971a369a2d316131109c7fa9ad3958d1d38a8ce821d6acea1085f0a8e01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/224a3971a369a2d316131109c7fa9ad3958d1d38a8ce821d6acea1085f0a8e01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-133308",
	                "Source": "/var/lib/docker/volumes/pause-133308/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-133308",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-133308",
	                "name.minikube.sigs.k8s.io": "pause-133308",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72be028bb82bc782fd945cdab82a5dd3763fbd00f071ff9b57656d61a87d791c",
	            "SandboxKey": "/var/run/docker/netns/72be028bb82b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-133308": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:c0:94:57:62:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "581ae4787ade0bc5a686725b6a6eaa5a0f99b4f3c624d6e0c7486bfb3cfbcd2b",
	                    "EndpointID": "3d7bf723bd5cb1efcef0c3bcb01d07b99e8acda78cd6152ac628d882ce7498a3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-133308",
	                        "199431f2451f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
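Buried in the inspect dump above are the two facts that matter here: the container is still Running, and the apiserver port 8443 is published to the host. A hedged jq one-liner to extract that mapping, with paths taken from the JSON exactly as printed (jq assumed available on the host):

	# prints 32973 for the container shown above
	docker inspect pause-133308 \
		| jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'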
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-133308 -n pause-133308
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-133308 -n pause-133308: exit status 2 (384.582116ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-133308 logs -n 25
helpers_test.go:261: TestPause/serial/VerifyStatus logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-764486                                                                                               │ multinode-764486            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ start   │ -p multinode-764486-m02 --driver=docker  --container-runtime=containerd                                                │ multinode-764486-m02        │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ start   │ -p multinode-764486-m03 --driver=docker  --container-runtime=containerd                                                │ multinode-764486-m03        │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ node    │ add -p multinode-764486                                                                                                │ multinode-764486            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ delete  │ -p multinode-764486-m03                                                                                                │ multinode-764486-m03        │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p multinode-764486                                                                                                    │ multinode-764486            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p scheduled-stop-018474 --memory=3072 --driver=docker  --container-runtime=containerd                                 │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ stop    │ -p scheduled-stop-018474 --schedule 5m -v=5 --alsologtostderr                                                          │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 5m -v=5 --alsologtostderr                                                          │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 5m -v=5 --alsologtostderr                                                          │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --cancel-scheduled                                                                            │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr                                                         │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p scheduled-stop-018474                                                                                               │ scheduled-stop-018474       │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p insufficient-storage-499335 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd │ insufficient-storage-499335 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ delete  │ -p insufficient-storage-499335                                                                                         │ insufficient-storage-499335 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p pause-133308 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd        │ pause-133308                │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:58 UTC │
	│ start   │ -p missing-upgrade-934782 --memory=3072 --driver=docker  --container-runtime=containerd                                │ missing-upgrade-934782      │ jenkins │ v1.35.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:58 UTC │
	│ start   │ -p pause-133308 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                 │ pause-133308                │ jenkins │ v1.37.0 │ 28 Dec 25 06:58 UTC │ 28 Dec 25 06:58 UTC │
	│ start   │ -p missing-upgrade-934782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd         │ missing-upgrade-934782      │ jenkins │ v1.37.0 │ 28 Dec 25 06:58 UTC │                     │
	│ pause   │ -p pause-133308 --alsologtostderr -v=5                                                                                 │ pause-133308                │ jenkins │ v1.37.0 │ 28 Dec 25 06:58 UTC │ 28 Dec 25 06:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:58:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:58:46.477353  152106 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:58:46.477572  152106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:58:46.477600  152106 out.go:374] Setting ErrFile to fd 2...
	I1228 06:58:46.477618  152106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:58:46.477895  152106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:58:46.478317  152106 out.go:368] Setting JSON to false
	I1228 06:58:46.480057  152106 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2476,"bootTime":1766902650,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 06:58:46.480176  152106 start.go:143] virtualization:  
	I1228 06:58:46.485541  152106 out.go:179] * [missing-upgrade-934782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 06:58:46.488717  152106 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:58:46.488798  152106 notify.go:221] Checking for updates...
	I1228 06:58:46.492238  152106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:58:46.495495  152106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:58:46.498339  152106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 06:58:46.501265  152106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 06:58:46.504713  152106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:58:46.508148  152106 config.go:182] Loaded profile config "missing-upgrade-934782": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1228 06:58:46.511724  152106 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1228 06:58:46.515114  152106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:58:46.554164  152106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:58:46.554356  152106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:58:46.653518  152106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:58:46.642886323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:58:46.653625  152106 docker.go:319] overlay module found
	I1228 06:58:46.656747  152106 out.go:179] * Using the docker driver based on existing profile
	I1228 06:58:46.659476  152106 start.go:309] selected driver: docker
	I1228 06:58:46.659495  152106 start.go:928] validating driver "docker" against &{Name:missing-upgrade-934782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-934782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:58:46.659595  152106 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:58:46.660263  152106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:58:46.748371  152106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:58:46.738359955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:58:46.748703  152106 cni.go:84] Creating CNI manager for ""
	I1228 06:58:46.748761  152106 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:58:46.748807  152106 start.go:353] cluster config:
	{Name:missing-upgrade-934782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-934782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:58:46.751994  152106 out.go:179] * Starting "missing-upgrade-934782" primary control-plane node in "missing-upgrade-934782" cluster
	I1228 06:58:46.754976  152106 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 06:58:46.757824  152106 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:58:46.760647  152106 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I1228 06:58:46.760697  152106 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I1228 06:58:46.760707  152106 cache.go:65] Caching tarball of preloaded images
	I1228 06:58:46.760798  152106 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 06:58:46.760808  152106 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I1228 06:58:46.760917  152106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/missing-upgrade-934782/config.json ...
	I1228 06:58:46.761118  152106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1228 06:58:46.789488  152106 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1228 06:58:46.789507  152106 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1228 06:58:46.789522  152106 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:58:46.789551  152106 start.go:360] acquireMachinesLock for missing-upgrade-934782: {Name:mkc8e78ac70531bc2180f00bc280010d67b853c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:58:46.789601  152106 start.go:364] duration metric: took 34.405µs to acquireMachinesLock for "missing-upgrade-934782"
	I1228 06:58:46.789620  152106 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:58:46.789625  152106 fix.go:54] fixHost starting: 
	I1228 06:58:46.789911  152106 cli_runner.go:164] Run: docker container inspect missing-upgrade-934782 --format={{.State.Status}}
	W1228 06:58:46.810578  152106 cli_runner.go:211] docker container inspect missing-upgrade-934782 --format={{.State.Status}} returned with exit code 1
	I1228 06:58:46.810634  152106 fix.go:112] recreateIfNeeded on missing-upgrade-934782: state= err=unknown state "missing-upgrade-934782": docker container inspect missing-upgrade-934782 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934782
	I1228 06:58:46.810649  152106 fix.go:117] machineExists: false. err=machine does not exist
	I1228 06:58:46.813960  152106 out.go:179] * docker "missing-upgrade-934782" container is missing, will recreate.
	I1228 06:58:46.070661  151082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/pause-133308/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:58:46.095761  151082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 06:58:46.126206  151082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:58:46.165529  151082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 06:58:46.189117  151082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:58:46.204025  151082 ssh_runner.go:195] Run: openssl version
	I1228 06:58:46.213262  151082 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 06:58:46.224180  151082 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 06:58:46.237932  151082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 06:58:46.244316  151082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 06:58:46.244446  151082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 06:58:46.297821  151082 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:58:46.309898  151082 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 06:58:46.319692  151082 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 06:58:46.342892  151082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 06:58:46.349134  151082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 06:58:46.349248  151082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 06:58:46.400747  151082 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:58:46.409032  151082 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:58:46.416525  151082 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:58:46.424323  151082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:58:46.428825  151082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:58:46.428898  151082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:58:46.473030  151082 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:58:46.481313  151082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:58:46.486074  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:58:46.532349  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:58:46.581721  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:58:46.627348  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:58:46.676137  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:58:46.724826  151082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1228 06:58:46.768800  151082 kubeadm.go:401] StartCluster: {Name:pause-133308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-133308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:58:46.768961  151082 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 06:58:46.800059  151082 cri.go:83] list returned 14 containers
	I1228 06:58:46.800126  151082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:58:46.811342  151082 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:58:46.811362  151082 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:58:46.811419  151082 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:58:46.823086  151082 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:58:46.823729  151082 kubeconfig.go:125] found "pause-133308" server: "https://192.168.76.2:8443"
	I1228 06:58:46.824750  151082 kapi.go:59] client config for pause-133308: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/profiles/pause-133308/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/profiles/pause-133308/client.key", CAFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:58:46.825354  151082 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1228 06:58:46.825375  151082 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1228 06:58:46.825381  151082 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1228 06:58:46.825386  151082 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1228 06:58:46.825390  151082 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1228 06:58:46.825394  151082 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1228 06:58:46.825663  151082 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:58:46.838832  151082 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 06:58:46.838870  151082 kubeadm.go:602] duration metric: took 27.50012ms to restartPrimaryControlPlane
	I1228 06:58:46.838880  151082 kubeadm.go:403] duration metric: took 70.089335ms to StartCluster
	I1228 06:58:46.838895  151082 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:58:46.838951  151082 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:58:46.839925  151082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:58:46.840176  151082 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 06:58:46.840548  151082 config.go:182] Loaded profile config "pause-133308": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:58:46.840608  151082 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:58:46.843459  151082 out.go:179] * Enabled addons: 
	I1228 06:58:46.843546  151082 out.go:179] * Verifying Kubernetes components...
	I1228 06:58:46.847864  151082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:58:46.848222  151082 addons.go:530] duration metric: took 7.607646ms for enable addons: enabled=[]
	I1228 06:58:47.011450  151082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:58:47.025054  151082 node_ready.go:35] waiting up to 6m0s for node "pause-133308" to be "Ready" ...
	I1228 06:58:47.040016  151082 node_ready.go:49] node "pause-133308" is "Ready"
	I1228 06:58:47.040046  151082 node_ready.go:38] duration metric: took 14.96225ms for node "pause-133308" to be "Ready" ...
	I1228 06:58:47.040060  151082 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:58:47.040117  151082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:58:47.053271  151082 api_server.go:72] duration metric: took 213.056494ms to wait for apiserver process to appear ...
	I1228 06:58:47.053296  151082 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:58:47.053316  151082 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:58:47.061680  151082 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:58:47.062661  151082 api_server.go:141] control plane version: v1.35.0
	I1228 06:58:47.062684  151082 api_server.go:131] duration metric: took 9.380614ms to wait for apiserver health ...
	I1228 06:58:47.062694  151082 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:58:47.065969  151082 system_pods.go:59] 7 kube-system pods found
	I1228 06:58:47.066004  151082 system_pods.go:61] "coredns-7d764666f9-pwgw6" [e55605a6-43a3-4d8b-989b-90edf420010c] Running
	I1228 06:58:47.066011  151082 system_pods.go:61] "etcd-pause-133308" [52f252c2-e6ee-4052-9f7f-f0c3883b8001] Running
	I1228 06:58:47.066015  151082 system_pods.go:61] "kindnet-28dqv" [0387cd98-e388-4549-8c53-f406c95b8e1f] Running
	I1228 06:58:47.066020  151082 system_pods.go:61] "kube-apiserver-pause-133308" [4157ba77-f450-4bc9-87ca-87d5f9b805b7] Running
	I1228 06:58:47.066024  151082 system_pods.go:61] "kube-controller-manager-pause-133308" [29d1355b-923b-46df-b9e2-6f48f44b7613] Running
	I1228 06:58:47.066028  151082 system_pods.go:61] "kube-proxy-wswcd" [6e7e586c-fbbf-4621-8175-1e0a1720ab95] Running
	I1228 06:58:47.066033  151082 system_pods.go:61] "kube-scheduler-pause-133308" [3baa2ba5-c8fb-43b0-bf11-257e56a4c5db] Running
	I1228 06:58:47.066039  151082 system_pods.go:74] duration metric: took 3.339543ms to wait for pod list to return data ...
	I1228 06:58:47.066051  151082 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:58:47.068597  151082 default_sa.go:45] found service account: "default"
	I1228 06:58:47.068624  151082 default_sa.go:55] duration metric: took 2.567119ms for default service account to be created ...
	I1228 06:58:47.068634  151082 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:58:47.071509  151082 system_pods.go:86] 7 kube-system pods found
	I1228 06:58:47.071580  151082 system_pods.go:89] "coredns-7d764666f9-pwgw6" [e55605a6-43a3-4d8b-989b-90edf420010c] Running
	I1228 06:58:47.071594  151082 system_pods.go:89] "etcd-pause-133308" [52f252c2-e6ee-4052-9f7f-f0c3883b8001] Running
	I1228 06:58:47.071600  151082 system_pods.go:89] "kindnet-28dqv" [0387cd98-e388-4549-8c53-f406c95b8e1f] Running
	I1228 06:58:47.071605  151082 system_pods.go:89] "kube-apiserver-pause-133308" [4157ba77-f450-4bc9-87ca-87d5f9b805b7] Running
	I1228 06:58:47.071609  151082 system_pods.go:89] "kube-controller-manager-pause-133308" [29d1355b-923b-46df-b9e2-6f48f44b7613] Running
	I1228 06:58:47.071614  151082 system_pods.go:89] "kube-proxy-wswcd" [6e7e586c-fbbf-4621-8175-1e0a1720ab95] Running
	I1228 06:58:47.071619  151082 system_pods.go:89] "kube-scheduler-pause-133308" [3baa2ba5-c8fb-43b0-bf11-257e56a4c5db] Running
	I1228 06:58:47.071625  151082 system_pods.go:126] duration metric: took 2.985799ms to wait for k8s-apps to be running ...
	I1228 06:58:47.071637  151082 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:58:47.071691  151082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:58:47.084403  151082 system_svc.go:56] duration metric: took 12.758336ms WaitForService to wait for kubelet
	I1228 06:58:47.084502  151082 kubeadm.go:587] duration metric: took 244.291377ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:58:47.084530  151082 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:58:47.087579  151082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 06:58:47.087617  151082 node_conditions.go:123] node cpu capacity is 2
	I1228 06:58:47.087632  151082 node_conditions.go:105] duration metric: took 3.095528ms to run NodePressure ...
	I1228 06:58:47.087646  151082 start.go:242] waiting for startup goroutines ...
	I1228 06:58:47.087656  151082 start.go:247] waiting for cluster config update ...
	I1228 06:58:47.087670  151082 start.go:256] writing updated cluster config ...
	I1228 06:58:47.088018  151082 ssh_runner.go:195] Run: rm -f paused
	I1228 06:58:47.095254  151082 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:58:47.095915  151082 kapi.go:59] client config for pause-133308: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/profiles/pause-133308/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/profiles/pause-133308/client.key", CAFile:"/home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:58:47.098980  151082 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pwgw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.104043  151082 pod_ready.go:94] pod "coredns-7d764666f9-pwgw6" is "Ready"
	I1228 06:58:47.104069  151082 pod_ready.go:86] duration metric: took 5.059841ms for pod "coredns-7d764666f9-pwgw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.106583  151082 pod_ready.go:83] waiting for pod "etcd-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.111508  151082 pod_ready.go:94] pod "etcd-pause-133308" is "Ready"
	I1228 06:58:47.111546  151082 pod_ready.go:86] duration metric: took 4.936723ms for pod "etcd-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.113946  151082 pod_ready.go:83] waiting for pod "kube-apiserver-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.118416  151082 pod_ready.go:94] pod "kube-apiserver-pause-133308" is "Ready"
	I1228 06:58:47.118442  151082 pod_ready.go:86] duration metric: took 4.469738ms for pod "kube-apiserver-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.120840  151082 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.499693  151082 pod_ready.go:94] pod "kube-controller-manager-pause-133308" is "Ready"
	I1228 06:58:47.499720  151082 pod_ready.go:86] duration metric: took 378.857674ms for pod "kube-controller-manager-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:47.700448  151082 pod_ready.go:83] waiting for pod "kube-proxy-wswcd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:48.099966  151082 pod_ready.go:94] pod "kube-proxy-wswcd" is "Ready"
	I1228 06:58:48.100003  151082 pod_ready.go:86] duration metric: took 399.505708ms for pod "kube-proxy-wswcd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:48.298854  151082 pod_ready.go:83] waiting for pod "kube-scheduler-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:48.699241  151082 pod_ready.go:94] pod "kube-scheduler-pause-133308" is "Ready"
	I1228 06:58:48.699270  151082 pod_ready.go:86] duration metric: took 400.388671ms for pod "kube-scheduler-pause-133308" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:58:48.699284  151082 pod_ready.go:40] duration metric: took 1.60398524s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:58:48.751041  151082 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1228 06:58:48.754305  151082 out.go:203] 
	W1228 06:58:48.757079  151082 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1228 06:58:48.759910  151082 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:58:48.763796  151082 out.go:179] * Done! kubectl is now configured to use "pause-133308" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e1be67e6f2907       e08f4d9d2e6ed       12 seconds ago      Running             coredns                   0                   1d5e553f976a7       coredns-7d764666f9-pwgw6               kube-system
	1e87ed1482ef3       c96ee3c174987       23 seconds ago      Running             kindnet-cni               0                   587e88d889b01       kindnet-28dqv                          kube-system
	38e145afccb94       de369f46c2ff5       25 seconds ago      Running             kube-proxy                0                   c010b5e086731       kube-proxy-wswcd                       kube-system
	00e5d489cad40       ddc8422d4d35a       39 seconds ago      Running             kube-scheduler            0                   9cfc1c5ebb543       kube-scheduler-pause-133308            kube-system
	2f9ef175f45fc       c3fcf259c473a       39 seconds ago      Running             kube-apiserver            0                   fa085d573255c       kube-apiserver-pause-133308            kube-system
	676afba8b3369       271e49a0ebc56       39 seconds ago      Running             etcd                      0                   29b9548907287       etcd-pause-133308                      kube-system
	9ae98c4aefdf9       88898f1d1a62a       39 seconds ago      Running             kube-controller-manager   0                   1b280167dab07       kube-controller-manager-pause-133308   kube-system
	
	
	==> containerd <==
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.014342688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.014437852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.014519314Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.014686676Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015229566Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015377186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015492117Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015594920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015724243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015817208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.015974519Z" level=info msg="Connect containerd service"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.020824101Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.105611119Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.105923475Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.105814067Z" level=info msg="Start subscribing containerd event"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.114609058Z" level=info msg="Start recovering state"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272437314Z" level=info msg="Start event monitor"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272599966Z" level=info msg="Start cni network conf syncer for default"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272625739Z" level=info msg="Start streaming server"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272636881Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272646646Z" level=info msg="runtime interface starting up..."
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272674273Z" level=info msg="starting plugins..."
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.272690199Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 06:58:45 pause-133308 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 28 06:58:45 pause-133308 containerd[2336]: time="2025-12-28T06:58:45.291672501Z" level=info msg="containerd successfully booted in 0.494470s"
	
	
	==> describe nodes <==
	Name:               pause-133308
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-133308
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=pause-133308
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_58_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:58:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-133308
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:58:37 +0000   Sun, 28 Dec 2025 06:58:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:58:37 +0000   Sun, 28 Dec 2025 06:58:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:58:37 +0000   Sun, 28 Dec 2025 06:58:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:58:37 +0000   Sun, 28 Dec 2025 06:58:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-133308
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                e3620863-3d7f-4d37-b973-76aa86562d01
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-pwgw6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-pause-133308                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-28dqv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-133308             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-133308    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-wswcd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-133308             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-133308 event: Registered Node pause-133308 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:58:50 up 41 min,  0 user,  load average: 4.09, 2.45, 2.06
	Linux pause-133308 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:58:29 pause-133308 kubelet[1468]: E1228 06:58:29.608615    1468 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-133308" containerName="kube-controller-manager"
	Dec 28 06:58:29 pause-133308 kubelet[1468]: I1228 06:58:29.624664    1468 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-28dqv" podStartSLOduration=5.045997773 podStartE2EDuration="6.62464808s" podCreationTimestamp="2025-12-28 06:58:23 +0000 UTC" firstStartedPulling="2025-12-28 06:58:25.636836191 +0000 UTC m=+7.079725458" lastFinishedPulling="2025-12-28 06:58:27.215486498 +0000 UTC m=+8.658375765" observedRunningTime="2025-12-28 06:58:27.884043412 +0000 UTC m=+9.326932687" watchObservedRunningTime="2025-12-28 06:58:29.62464808 +0000 UTC m=+11.067537355"
	Dec 28 06:58:32 pause-133308 kubelet[1468]: E1228 06:58:32.354352    1468 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-133308" containerName="kube-apiserver"
	Dec 28 06:58:35 pause-133308 kubelet[1468]: E1228 06:58:35.038127    1468 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-133308" containerName="kube-scheduler"
	Dec 28 06:58:36 pause-133308 kubelet[1468]: E1228 06:58:36.730385    1468 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-133308" containerName="etcd"
	Dec 28 06:58:37 pause-133308 kubelet[1468]: I1228 06:58:37.802108    1468 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 28 06:58:37 pause-133308 kubelet[1468]: I1228 06:58:37.841911    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e55605a6-43a3-4d8b-989b-90edf420010c-config-volume\") pod \"coredns-7d764666f9-pwgw6\" (UID: \"e55605a6-43a3-4d8b-989b-90edf420010c\") " pod="kube-system/coredns-7d764666f9-pwgw6"
	Dec 28 06:58:37 pause-133308 kubelet[1468]: I1228 06:58:37.841964    1468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwzf\" (UniqueName: \"kubernetes.io/projected/e55605a6-43a3-4d8b-989b-90edf420010c-kube-api-access-hcwzf\") pod \"coredns-7d764666f9-pwgw6\" (UID: \"e55605a6-43a3-4d8b-989b-90edf420010c\") " pod="kube-system/coredns-7d764666f9-pwgw6"
	Dec 28 06:58:38 pause-133308 kubelet[1468]: E1228 06:58:38.894276    1468 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pwgw6" containerName="coredns"
	Dec 28 06:58:38 pause-133308 kubelet[1468]: I1228 06:58:38.921265    1468 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pwgw6" podStartSLOduration=14.921251325 podStartE2EDuration="14.921251325s" podCreationTimestamp="2025-12-28 06:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:58:38.920954287 +0000 UTC m=+20.363843562" watchObservedRunningTime="2025-12-28 06:58:38.921251325 +0000 UTC m=+20.364140591"
	Dec 28 06:58:39 pause-133308 kubelet[1468]: E1228 06:58:39.896121    1468 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pwgw6" containerName="coredns"
	Dec 28 06:58:40 pause-133308 kubelet[1468]: E1228 06:58:40.898174    1468 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pwgw6" containerName="coredns"
	Dec 28 06:58:44 pause-133308 kubelet[1468]: W1228 06:58:44.766081    1468 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory"
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.767539    1468 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.768449    1468 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.768659    1468 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.768739    1468 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Dec 28 06:58:44 pause-133308 kubelet[1468]: W1228 06:58:44.867659    1468 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory"
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.911757    1468 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.911811    1468 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Dec 28 06:58:44 pause-133308 kubelet[1468]: E1228 06:58:44.911841    1468 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Dec 28 06:58:45 pause-133308 kubelet[1468]: W1228 06:58:45.028435    1468 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory"
	Dec 28 06:58:49 pause-133308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:58:49 pause-133308 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:58:49 pause-133308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
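The log above repeats three verifier idioms that are easy to miss in the noise. First, the "missing-upgrade" path decides whether a node container must be recreated from a single docker inspect: exit status 1 plus "No such container" on stderr is what flips fix.go into recreate mode. Second, CA material is trusted by hash-linking it into /etc/ssl/certs, and every control-plane certificate is screened for imminent expiry. A minimal sketch of these checks, runnable by hand with the paths and names from this job (minikube itself runs them over SSH inside the node container):

    # Recreate decision: exit 1 + "No such container" means the machine is gone
    docker container inspect missing-upgrade-934782 --format='{{.State.Status}}'

    # CA trust: OpenSSL resolves CAs via <subject-hash>.0 symlinks in /etc/ssl/certs
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo test -L /etc/ssl/certs/b5213941.0 && echo "CA trusted"

    # Expiry screen: -checkend 86400 exits 0 only if the cert is still valid 24h from now
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400

Third, the apiserver health probe at 06:58:47 is a plain HTTPS GET against /healthz. Assuming anonymous auth is enabled (the Kubernetes default), it can be reproduced with curl; -k is needed because the serving certificate chains to minikube's own CA rather than a system-trusted one:

    curl -k https://192.168.76.2:8443/healthz   # expect: ok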
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-133308 -n pause-133308
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-133308 -n pause-133308: exit status 2 (334.735338ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
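An aside for readers of these post-mortems: `minikube status` renders each component through a Go template and folds the overall result into its exit code, so a non-zero exit here only signals that at least one tracked component is not in its nominal running state, which is why the harness annotates exit status 2 with "(may be ok)". The check above, reproduced by hand with this job's binary and profile:

    out/minikube-linux-arm64 status -p pause-133308 --format='{{.APIServer}}'   # printed "Running" here
    echo $?                                                                     # 2 in this run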
helpers_test.go:270: (dbg) Run:  kubectl --context pause-133308 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/VerifyStatus FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/VerifyStatus (1.95s)
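The pod sweep at helpers_test.go:270 above is a useful standalone diagnostic: the field selector inverts the match so only non-Running pods are printed, and an empty result means the workload side of the cluster was healthy when the status check failed. By hand, with this job's context name:

    kubectl --context pause-133308 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'   # empty output => every pod is Running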

TestStartStop/group/old-k8s-version/serial/Pause (6.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-251758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-251758 -n old-k8s-version-251758
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-251758 -n old-k8s-version-251758: exit status 2 (340.840332ms)

-- stdout --
	Running

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-251758 -n old-k8s-version-251758
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-251758 -n old-k8s-version-251758: exit status 2 (347.932071ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-251758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-251758 -n old-k8s-version-251758
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-251758 -n old-k8s-version-251758
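The assertion sequence above is just pause, two status reads, unpause, two more status reads; the failure is that the first read still reported the apiserver as "Running" immediately after `pause` returned, where the test wants "Paused". As a standalone reproduction with this job's profile name (expected values are inferred from the test's assertions and from which reads passed here):

    out/minikube-linux-arm64 pause   -p old-k8s-version-251758 --alsologtostderr -v=1
    out/minikube-linux-arm64 status  -p old-k8s-version-251758 --format='{{.APIServer}}'   # want: Paused
    out/minikube-linux-arm64 status  -p old-k8s-version-251758 --format='{{.Kubelet}}'     # want: Stopped
    out/minikube-linux-arm64 unpause -p old-k8s-version-251758 --alsologtostderr -v=1
    out/minikube-linux-arm64 status  -p old-k8s-version-251758 --format='{{.APIServer}}'   # want: Running
    out/minikube-linux-arm64 status  -p old-k8s-version-251758 --format='{{.Kubelet}}'     # want: Running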
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-251758
helpers_test.go:244: (dbg) docker inspect old-k8s-version-251758:

-- stdout --
	[
	    {
	        "Id": "c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819",
	        "Created": "2025-12-28T07:14:24.025861118Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:15:41.80609911Z",
	            "FinishedAt": "2025-12-28T07:15:41.02748911Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/hosts",
	        "LogPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819-json.log",
	        "Name": "/old-k8s-version-251758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-251758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-251758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819",
	                "LowerDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-251758",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-251758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-251758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-251758",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-251758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3458d06b49877416429497880c1d8acd04dc40fe12388c98a5b392593f2ce7",
	            "SandboxKey": "/var/run/docker/netns/cb3458d06b49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-251758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:91:3b:32:f8:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1009ce3f5c8920d7e9c1b4d43959d604fdb2df80f859764bcfa6d7e7d0de0f2e",
	                    "EndpointID": "5da5834e51a589b73d437bee3fd8683a01dca291d0f6c42ec991bd8e36110f79",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-251758",
	                        "c7dc35270dec"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758 -n old-k8s-version-251758
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-251758 logs -n 25
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-742569 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo containerd config dump                                                                                                                                                                                                        │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo crio config                                                                                                                                                                                                                   │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ delete  │ -p cilium-742569                                                                                                                                                                                                                                    │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ delete  │ -p cert-expiration-478620                                                                                                                                                                                                                           │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ start   │ -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │                     │
	│ ssh     │ force-systemd-env-782848 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ delete  │ -p force-systemd-env-782848                                                                                                                                                                                                                         │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ start   │ -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p cert-options-913529                                                                                                                                                                                                                              │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ stop    │ -p old-k8s-version-251758 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
	│ image   │ old-k8s-version-251758 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ pause   │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:15:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:15:41.515391  213781 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:15:41.515581  213781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:15:41.515613  213781 out.go:374] Setting ErrFile to fd 2...
	I1228 07:15:41.515635  213781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:15:41.516021  213781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:15:41.516571  213781 out.go:368] Setting JSON to false
	I1228 07:15:41.517470  213781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3491,"bootTime":1766902650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:15:41.517599  213781 start.go:143] virtualization:  
	I1228 07:15:41.520731  213781 out.go:179] * [old-k8s-version-251758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:15:41.523112  213781 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:15:41.523220  213781 notify.go:221] Checking for updates...
	I1228 07:15:41.529085  213781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:15:41.532102  213781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:41.534967  213781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:15:41.537946  213781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:15:41.540789  213781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:15:41.544241  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:41.547823  213781 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1228 07:15:41.550594  213781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:15:41.580200  213781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:15:41.580330  213781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:15:41.637460  213781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:15:41.627558424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:15:41.637570  213781 docker.go:319] overlay module found
	I1228 07:15:41.640718  213781 out.go:179] * Using the docker driver based on existing profile
	I1228 07:15:41.643586  213781 start.go:309] selected driver: docker
	I1228 07:15:41.643612  213781 start.go:928] validating driver "docker" against &{Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:41.643712  213781 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:15:41.644451  213781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:15:41.710516  213781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:15:41.6962249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:15:41.710971  213781 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:15:41.711042  213781 cni.go:84] Creating CNI manager for ""
	I1228 07:15:41.711160  213781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:15:41.711215  213781 start.go:353] cluster config:
	{Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:41.714349  213781 out.go:179] * Starting "old-k8s-version-251758" primary control-plane node in "old-k8s-version-251758" cluster
	I1228 07:15:41.717242  213781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:15:41.720110  213781 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:15:41.722898  213781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 07:15:41.722941  213781 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:15:41.722951  213781 cache.go:65] Caching tarball of preloaded images
	I1228 07:15:41.723036  213781 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:15:41.723045  213781 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1228 07:15:41.723165  213781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/config.json ...
	I1228 07:15:41.723395  213781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:15:41.749317  213781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:15:41.749335  213781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:15:41.749360  213781 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:15:41.749390  213781 start.go:360] acquireMachinesLock for old-k8s-version-251758: {Name:mk1109054908f5edf3f362974288170bd62da790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:15:41.749446  213781 start.go:364] duration metric: took 39.434µs to acquireMachinesLock for "old-k8s-version-251758"
	I1228 07:15:41.749464  213781 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:15:41.749470  213781 fix.go:54] fixHost starting: 
	I1228 07:15:41.749727  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:41.770078  213781 fix.go:112] recreateIfNeeded on old-k8s-version-251758: state=Stopped err=<nil>
	W1228 07:15:41.770114  213781 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:15:41.773658  213781 out.go:252] * Restarting existing docker container for "old-k8s-version-251758" ...
	I1228 07:15:41.773746  213781 cli_runner.go:164] Run: docker start old-k8s-version-251758
	I1228 07:15:42.040760  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:42.074930  213781 kic.go:430] container "old-k8s-version-251758" state is running.
	I1228 07:15:42.075353  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:42.102552  213781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/config.json ...
	I1228 07:15:42.102839  213781 machine.go:94] provisionDockerMachine start ...
	I1228 07:15:42.102904  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:42.128773  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:42.129129  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:42.129149  213781 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:15:42.129834  213781 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:15:45.284637  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-251758
	
	I1228 07:15:45.284660  213781 ubuntu.go:182] provisioning hostname "old-k8s-version-251758"
	I1228 07:15:45.284753  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.306385  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:45.306763  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:45.306777  213781 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-251758 && echo "old-k8s-version-251758" | sudo tee /etc/hostname
	I1228 07:15:45.456427  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-251758
	
	I1228 07:15:45.456546  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.486572  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:45.486922  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:45.486947  213781 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-251758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-251758/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-251758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:15:45.632823  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:15:45.632853  213781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:15:45.632877  213781 ubuntu.go:190] setting up certificates
	I1228 07:15:45.632886  213781 provision.go:84] configureAuth start
	I1228 07:15:45.632952  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:45.649262  213781 provision.go:143] copyHostCerts
	I1228 07:15:45.649341  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:15:45.649355  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:15:45.649434  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:15:45.649543  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:15:45.649555  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:15:45.649583  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:15:45.649652  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:15:45.649660  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:15:45.649685  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:15:45.649745  213781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-251758 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-251758]
	I1228 07:15:45.980477  213781 provision.go:177] copyRemoteCerts
	I1228 07:15:45.980538  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:15:45.980588  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.998381  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.100464  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:15:46.117980  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1228 07:15:46.135406  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 07:15:46.153229  213781 provision.go:87] duration metric: took 520.329832ms to configureAuth
	I1228 07:15:46.153257  213781 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:15:46.153450  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:46.153466  213781 machine.go:97] duration metric: took 4.050617401s to provisionDockerMachine
	I1228 07:15:46.153475  213781 start.go:293] postStartSetup for "old-k8s-version-251758" (driver="docker")
	I1228 07:15:46.153485  213781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:15:46.153536  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:15:46.153580  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.171103  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.268799  213781 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:15:46.272143  213781 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:15:46.272169  213781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:15:46.272181  213781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:15:46.272238  213781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:15:46.272315  213781 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:15:46.272415  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:15:46.280192  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:15:46.297442  213781 start.go:296] duration metric: took 143.951737ms for postStartSetup
	I1228 07:15:46.297521  213781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:15:46.297562  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.314194  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.409685  213781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:15:46.414371  213781 fix.go:56] duration metric: took 4.66489563s for fixHost
	I1228 07:15:46.414400  213781 start.go:83] releasing machines lock for "old-k8s-version-251758", held for 4.664945107s
	I1228 07:15:46.414478  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:46.431431  213781 ssh_runner.go:195] Run: cat /version.json
	I1228 07:15:46.431490  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.431776  213781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:15:46.431832  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.449645  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.450967  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.544154  213781 ssh_runner.go:195] Run: systemctl --version
	I1228 07:15:46.551127  213781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:15:46.632770  213781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:15:46.632846  213781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:15:46.640588  213781 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:15:46.640618  213781 start.go:496] detecting cgroup driver to use...
	I1228 07:15:46.640651  213781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:15:46.640697  213781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:15:46.658595  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:15:46.672342  213781 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:15:46.672406  213781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:15:46.688207  213781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:15:46.701972  213781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:15:46.810413  213781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:15:46.931227  213781 docker.go:234] disabling docker service ...
	I1228 07:15:46.931364  213781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:15:46.946864  213781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:15:46.963198  213781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:15:47.101216  213781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:15:47.216967  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:15:47.229631  213781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:15:47.243527  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1228 07:15:47.252139  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:15:47.260844  213781 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:15:47.260959  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:15:47.269353  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:15:47.278010  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:15:47.286605  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:15:47.295310  213781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:15:47.303417  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:15:47.312688  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:15:47.321372  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:15:47.330401  213781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:15:47.337813  213781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:15:47.345357  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:47.461373  213781 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:15:47.604741  213781 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:15:47.604868  213781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:15:47.608888  213781 start.go:574] Will wait 60s for crictl version
	I1228 07:15:47.608957  213781 ssh_runner.go:195] Run: which crictl
	I1228 07:15:47.612395  213781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:15:47.639107  213781 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:15:47.639186  213781 ssh_runner.go:195] Run: containerd --version
	I1228 07:15:47.658295  213781 ssh_runner.go:195] Run: containerd --version
	I1228 07:15:47.682052  213781 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.2.1 ...
	I1228 07:15:47.684965  213781 cli_runner.go:164] Run: docker network inspect old-k8s-version-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:15:47.700740  213781 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:15:47.710622  213781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:15:47.721753  213781 kubeadm.go:884] updating cluster {Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:15:47.721881  213781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 07:15:47.721942  213781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:15:47.755054  213781 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:15:47.755078  213781 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:15:47.755137  213781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:15:47.785448  213781 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:15:47.785473  213781 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:15:47.785482  213781 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1228 07:15:47.785587  213781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-251758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:15:47.785653  213781 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:15:47.819597  213781 cni.go:84] Creating CNI manager for ""
	I1228 07:15:47.819622  213781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:15:47.819649  213781 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:15:47.819676  213781 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-251758 NodeName:old-k8s-version-251758 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:15:47.819831  213781 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-251758"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:15:47.819899  213781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1228 07:15:47.827833  213781 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:15:47.827916  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:15:47.835561  213781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1228 07:15:47.848428  213781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:15:47.861106  213781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
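
The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and only swapped in if it differs from the current one (see the diff further below). As a hedged sketch, the staged file can be sanity-checked against the pinned binaries before any init or restart:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run

kubeadm init --dry-run renders the manifests without persisting changes; newer kubeadm releases also ship a dedicated 'kubeadm config validate --config <file>' subcommand.
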
	I1228 07:15:47.874384  213781 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:15:47.878131  213781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
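
The /etc/hosts one-liner above is idempotent: it strips any existing control-plane.minikube.internal entry, appends the current mapping, and installs the result with sudo cp (a plain '>' redirect would not run with root privileges). The same steps, unrolled for readability:

    # drop any stale entry for the control-plane alias
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    # append the mapping used by this run
    printf '192.168.76.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
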
	I1228 07:15:47.888036  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:48.006715  213781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:15:48.024824  213781 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758 for IP: 192.168.76.2
	I1228 07:15:48.024849  213781 certs.go:195] generating shared ca certs ...
	I1228 07:15:48.024864  213781 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.025007  213781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:15:48.025071  213781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:15:48.025082  213781 certs.go:257] generating profile certs ...
	I1228 07:15:48.025180  213781 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.key
	I1228 07:15:48.025263  213781 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.key.4f865eb4
	I1228 07:15:48.025316  213781 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.key
	I1228 07:15:48.025443  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:15:48.025485  213781 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:15:48.025502  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:15:48.025539  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:15:48.025568  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:15:48.025601  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:15:48.025657  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:15:48.026254  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:15:48.048586  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:15:48.068887  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:15:48.088057  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:15:48.107371  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1228 07:15:48.125942  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:15:48.147395  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:15:48.166107  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:15:48.184148  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:15:48.204360  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:15:48.225590  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:15:48.246352  213781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:15:48.260480  213781 ssh_runner.go:195] Run: openssl version
	I1228 07:15:48.270738  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.278700  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:15:48.286403  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.291927  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.291988  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.338602  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:15:48.346162  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.353600  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:15:48.361221  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.365100  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.365163  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.406209  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:15:48.413834  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.421361  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:15:48.430877  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.436178  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.436258  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.478496  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
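
The hash-and-symlink sequence above is how OpenSSL's trust directory is populated: each CA under /etc/ssl/certs must be reachable by its subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) for certificate lookup to find it. A sketch of the pattern for a single CA:

    # compute the subject hash, then expose the CA under <hash>.0
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
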
	I1228 07:15:48.486765  213781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:15:48.490996  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:15:48.535220  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:15:48.577930  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:15:48.620283  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:15:48.662562  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:15:48.708725  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
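
Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires inside that window and would need regeneration. The same check over several of the control-plane certs, as a sketch:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "${c}.crt expires within 24h"
    done
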
	I1228 07:15:48.761282  213781 kubeadm.go:401] StartCluster: {Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:48.761475  213781 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:15:48.773667  213781 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:15:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:15:48.773760  213781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:15:48.785418  213781 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:15:48.785441  213781 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:15:48.785535  213781 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:15:48.795145  213781 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:15:48.795563  213781 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-251758" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:48.795712  213781 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-251758" cluster setting kubeconfig missing "old-k8s-version-251758" context setting]
	I1228 07:15:48.796019  213781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
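
The repair adds the missing cluster and context stanzas for this profile to the shared kubeconfig. A rough kubectl equivalent of that repair, using the endpoint and CA from this run (a matching set-credentials entry for the client cert would complete it):

    kubectl config set-cluster old-k8s-version-251758 \
      --server=https://192.168.76.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt
    kubectl config set-context old-k8s-version-251758 \
      --cluster=old-k8s-version-251758 --user=old-k8s-version-251758
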
	I1228 07:15:48.797308  213781 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:15:48.811335  213781 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 07:15:48.811372  213781 kubeadm.go:602] duration metric: took 25.924616ms to restartPrimaryControlPlane
	I1228 07:15:48.811417  213781 kubeadm.go:403] duration metric: took 50.137365ms to StartCluster
	I1228 07:15:48.811438  213781 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.811515  213781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:48.812141  213781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.812393  213781 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:15:48.812823  213781 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:15:48.812897  213781 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-251758"
	I1228 07:15:48.812910  213781 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-251758"
	W1228 07:15:48.812916  213781 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:15:48.812934  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:48.812951  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.812975  213781 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-251758"
	I1228 07:15:48.812993  213781 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-251758"
	I1228 07:15:48.813270  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.813406  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.813923  213781 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-251758"
	I1228 07:15:48.813941  213781 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-251758"
	W1228 07:15:48.813948  213781 addons.go:248] addon metrics-server should already be in state true
	I1228 07:15:48.813973  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.814398  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.820528  213781 addons.go:70] Setting dashboard=true in profile "old-k8s-version-251758"
	I1228 07:15:48.820559  213781 addons.go:239] Setting addon dashboard=true in "old-k8s-version-251758"
	W1228 07:15:48.820568  213781 addons.go:248] addon dashboard should already be in state true
	I1228 07:15:48.820620  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.821134  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.824034  213781 out.go:179] * Verifying Kubernetes components...
	I1228 07:15:48.827264  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:48.893789  213781 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:15:48.897718  213781 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:15:48.908134  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:15:48.908169  213781 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:15:48.908236  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.914857  213781 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:15:48.917963  213781 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:48.917987  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:15:48.918049  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.919896  213781 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:15:48.923678  213781 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-251758"
	W1228 07:15:48.923700  213781 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:15:48.923724  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.924138  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.926314  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:15:48.926337  213781 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:15:48.926417  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.980575  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:48.992715  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:49.008047  213781 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:15:49.008071  213781 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:15:49.008139  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:49.008405  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:49.038228  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
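
Each ssh client above connects to 127.0.0.1:33060, the host port Docker mapped to the container's 22/tcp; the Go template in the inspect calls digs that port out of NetworkSettings. Two equivalent ways to read it by hand:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-251758
    docker port old-k8s-version-251758 22/tcp
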
	I1228 07:15:49.217505  213781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:15:49.281979  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:15:49.282049  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:15:49.312160  213781 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-251758" to be "Ready" ...
	I1228 07:15:49.346171  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:49.347310  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:15:49.347331  213781 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:15:49.390664  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:15:49.390691  213781 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:15:49.395543  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:15:49.395568  213781 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:15:49.477944  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:15:49.479893  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:15:49.479915  213781 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:15:49.487819  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:15:49.487845  213781 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:15:49.604087  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:15:49.604106  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:15:49.642211  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:15:49.789172  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:15:49.789198  213781 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:15:49.817818  213781 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1228 07:15:49.817872  213781 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
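
The connection-refused apply failures are expected while the apiserver is still coming up after the restart, so minikube retries with a short backoff (200ms here). The same retry shape in shell, with the delay schedule as a placeholder:

    for delay in 0.2 0.5 1 2; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.28.0/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done
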
	I1228 07:15:49.921188  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:15:49.921216  213781 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:15:50.005537  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:50.112958  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:15:50.112986  213781 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:15:50.172789  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:15:50.172813  213781 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:15:50.217608  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:15:50.217633  213781 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:15:50.292347  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:15:53.761289  213781 node_ready.go:49] node "old-k8s-version-251758" is "Ready"
	I1228 07:15:53.761330  213781 node_ready.go:38] duration metric: took 4.449086435s for node "old-k8s-version-251758" to be "Ready" ...
	I1228 07:15:53.761346  213781 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:15:53.761423  213781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:15:55.596037  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.118048746s)
	I1228 07:15:56.402936  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.760682048s)
	I1228 07:15:56.402965  213781 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-251758"
	I1228 07:15:56.620050  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.614465994s)
	I1228 07:15:57.160401  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.868007839s)
	I1228 07:15:57.160631  213781 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.399189635s)
	I1228 07:15:57.160664  213781 api_server.go:72] duration metric: took 8.348239005s to wait for apiserver process to appear ...
	I1228 07:15:57.160670  213781 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:15:57.160687  213781 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:15:57.163580  213781 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-251758 addons enable metrics-server
	
	I1228 07:15:57.166658  213781 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1228 07:15:56.359053  202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000118942s
	I1228 07:15:56.359085  202182 kubeadm.go:319] 
	I1228 07:15:56.359144  202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:15:56.359183  202182 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:15:56.359292  202182 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:15:56.359301  202182 kubeadm.go:319] 
	I1228 07:15:56.359405  202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:15:56.359441  202182 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:15:56.359476  202182 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:15:56.359484  202182 kubeadm.go:319] 
	I1228 07:15:56.372655  202182 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:15:56.373414  202182 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:15:56.373650  202182 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:15:56.374256  202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:15:56.374302  202182 kubeadm.go:319] 
	I1228 07:15:56.374426  202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1228 07:15:56.374572  202182 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000118942s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
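
The probe that timed out is kubeadm polling the kubelet's local health endpoint on 127.0.0.1:10248; connection refused for the full 4m window means the kubelet never came up and bound that port, and the kubeadm hint above points at a cgroup misconfiguration as the likely cause. The checks kubeadm suggests, plus the probe itself, as a sketch to run on the node:

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS http://127.0.0.1:10248/healthz
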
	
	I1228 07:15:56.374955  202182 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1228 07:15:56.816009  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:15:56.830130  202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:15:56.830189  202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:15:56.839676  202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:15:56.839743  202182 kubeadm.go:158] found existing configuration files:
	
	I1228 07:15:56.839818  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:15:56.848800  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:15:56.848913  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:15:56.858141  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:15:56.868016  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:15:56.868125  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:15:56.876557  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.886001  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:15:56.886129  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.894421  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:15:56.903733  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:15:56.903858  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
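
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that cannot be confirmed to point at https://control-plane.minikube.internal:8443 is removed before retrying init (here they were all already gone after the reset). The same logic as a loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
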
	I1228 07:15:56.912105  202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:15:56.973760  202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:15:56.974624  202182 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:15:57.076378  202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:15:57.076579  202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:15:57.076651  202182 kubeadm.go:319] OS: Linux
	I1228 07:15:57.076720  202182 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:15:57.076805  202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:15:57.076885  202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:15:57.076967  202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:15:57.077050  202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:15:57.077135  202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:15:57.077218  202182 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:15:57.077302  202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:15:57.077386  202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:15:57.173412  202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:15:57.173584  202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:15:57.173716  202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:15:57.193049  202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:15:57.196350  202182 out.go:252]   - Generating certificates and keys ...
	I1228 07:15:57.196587  202182 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:15:57.196675  202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:15:57.196779  202182 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:15:57.197830  202182 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:15:57.198363  202182 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:15:57.198849  202182 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:15:57.199374  202182 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:15:57.199787  202182 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:15:57.200352  202182 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:15:57.200860  202182 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:15:57.201385  202182 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:15:57.201487  202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:15:57.595218  202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:15:57.831579  202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:15:58.069431  202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:15:58.608051  202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:15:58.960100  202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:15:58.960768  202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:15:58.963496  202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:15:57.169567  213781 addons.go:530] duration metric: took 8.356745232s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1228 07:15:57.170643  213781 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:15:57.172042  213781 api_server.go:141] control plane version: v1.28.0
	I1228 07:15:57.172072  213781 api_server.go:131] duration metric: took 11.395353ms to wait for apiserver health ...
	I1228 07:15:57.172082  213781 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:15:57.178707  213781 system_pods.go:59] 9 kube-system pods found
	I1228 07:15:57.178750  213781 system_pods.go:61] "coredns-5dd5756b68-bq24f" [d30162e5-4586-47ce-98f4-f59746df82ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:15:57.178760  213781 system_pods.go:61] "etcd-old-k8s-version-251758" [89bf89f6-a96d-47e0-b7ba-99f69754c84c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:15:57.178766  213781 system_pods.go:61] "kindnet-knhp5" [97f2cb7e-2299-417a-9a40-620c0419ebba] Running
	I1228 07:15:57.178774  213781 system_pods.go:61] "kube-apiserver-old-k8s-version-251758" [976ee1b8-eb95-46c4-8cdf-3694d7a984e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:15:57.178782  213781 system_pods.go:61] "kube-controller-manager-old-k8s-version-251758" [5734ad03-d3a1-480b-94a1-8af5cbecbf42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:15:57.178787  213781 system_pods.go:61] "kube-proxy-jnkj2" [28083a3b-a189-4b49-b091-6b08fbe9526e] Running
	I1228 07:15:57.178794  213781 system_pods.go:61] "kube-scheduler-old-k8s-version-251758" [ffe5fb7d-5d3c-4dd9-a4fb-ec4b68f60520] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:15:57.178810  213781 system_pods.go:61] "metrics-server-57f55c9bc5-rszdx" [bbdba1a8-0217-4b61-8f2d-b99adc87f35b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:15:57.178815  213781 system_pods.go:61] "storage-provisioner" [f0b49587-0166-44ac-bb29-8cc45ec5668d] Running
	I1228 07:15:57.178823  213781 system_pods.go:74] duration metric: took 6.735487ms to wait for pod list to return data ...
	I1228 07:15:57.178836  213781 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:15:57.184256  213781 default_sa.go:45] found service account: "default"
	I1228 07:15:57.184285  213781 default_sa.go:55] duration metric: took 5.443249ms for default service account to be created ...
	I1228 07:15:57.184297  213781 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:15:57.191666  213781 system_pods.go:86] 9 kube-system pods found
	I1228 07:15:57.191703  213781 system_pods.go:89] "coredns-5dd5756b68-bq24f" [d30162e5-4586-47ce-98f4-f59746df82ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:15:57.191714  213781 system_pods.go:89] "etcd-old-k8s-version-251758" [89bf89f6-a96d-47e0-b7ba-99f69754c84c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:15:57.191722  213781 system_pods.go:89] "kindnet-knhp5" [97f2cb7e-2299-417a-9a40-620c0419ebba] Running
	I1228 07:15:57.191730  213781 system_pods.go:89] "kube-apiserver-old-k8s-version-251758" [976ee1b8-eb95-46c4-8cdf-3694d7a984e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:15:57.191737  213781 system_pods.go:89] "kube-controller-manager-old-k8s-version-251758" [5734ad03-d3a1-480b-94a1-8af5cbecbf42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:15:57.191749  213781 system_pods.go:89] "kube-proxy-jnkj2" [28083a3b-a189-4b49-b091-6b08fbe9526e] Running
	I1228 07:15:57.191756  213781 system_pods.go:89] "kube-scheduler-old-k8s-version-251758" [ffe5fb7d-5d3c-4dd9-a4fb-ec4b68f60520] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:15:57.191763  213781 system_pods.go:89] "metrics-server-57f55c9bc5-rszdx" [bbdba1a8-0217-4b61-8f2d-b99adc87f35b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:15:57.191774  213781 system_pods.go:89] "storage-provisioner" [f0b49587-0166-44ac-bb29-8cc45ec5668d] Running
	I1228 07:15:57.191782  213781 system_pods.go:126] duration metric: took 7.479772ms to wait for k8s-apps to be running ...
	I1228 07:15:57.191796  213781 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:15:57.191845  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:15:57.210025  213781 system_svc.go:56] duration metric: took 18.221047ms WaitForService to wait for kubelet
	I1228 07:15:57.210054  213781 kubeadm.go:587] duration metric: took 8.39762753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:15:57.210074  213781 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:15:57.214361  213781 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:15:57.214397  213781 node_conditions.go:123] node cpu capacity is 2
	I1228 07:15:57.214410  213781 node_conditions.go:105] duration metric: took 4.330353ms to run NodePressure ...
	I1228 07:15:57.214421  213781 start.go:242] waiting for startup goroutines ...
	I1228 07:15:57.214429  213781 start.go:247] waiting for cluster config update ...
	I1228 07:15:57.214440  213781 start.go:256] writing updated cluster config ...
	I1228 07:15:57.214735  213781 ssh_runner.go:195] Run: rm -f paused
	I1228 07:15:57.229539  213781 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:15:57.234754  213781 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bq24f" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:15:59.240902  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	I1228 07:15:58.967038  202182 out.go:252]   - Booting up control plane ...
	I1228 07:15:58.967133  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:15:58.967207  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:15:58.968494  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:15:58.990175  202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:15:58.990624  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:15:58.998239  202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:15:58.998885  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:15:58.998948  202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:15:59.134789  202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:15:59.134903  202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1228 07:16:01.740903  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:04.241723  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:06.242872  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:08.243720  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:10.741300  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:13.240428  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:15.740512  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:17.740572  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:19.742210  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:21.742385  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:24.240077  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:26.249689  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	I1228 07:16:27.743605  213781 pod_ready.go:94] pod "coredns-5dd5756b68-bq24f" is "Ready"
	I1228 07:16:27.743639  213781 pod_ready.go:86] duration metric: took 30.508858315s for pod "coredns-5dd5756b68-bq24f" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.747476  213781 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.758234  213781 pod_ready.go:94] pod "etcd-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.758268  213781 pod_ready.go:86] duration metric: took 10.758661ms for pod "etcd-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.763059  213781 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.769249  213781 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.769280  213781 pod_ready.go:86] duration metric: took 6.191178ms for pod "kube-apiserver-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.772232  213781 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.941288  213781 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.941367  213781 pod_ready.go:86] duration metric: took 169.098666ms for pod "kube-controller-manager-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.139363  213781 pod_ready.go:83] waiting for pod "kube-proxy-jnkj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.538967  213781 pod_ready.go:94] pod "kube-proxy-jnkj2" is "Ready"
	I1228 07:16:28.538997  213781 pod_ready.go:86] duration metric: took 399.606289ms for pod "kube-proxy-jnkj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.739792  213781 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:29.139314  213781 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-251758" is "Ready"
	I1228 07:16:29.139341  213781 pod_ready.go:86] duration metric: took 399.522284ms for pod "kube-scheduler-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:29.139353  213781 pod_ready.go:40] duration metric: took 31.909778772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:16:29.192451  213781 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1228 07:16:29.195826  213781 out.go:203] 
	W1228 07:16:29.198720  213781 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:16:29.203006  213781 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:16:29.206228  213781 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-251758" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b37d57a2c64f3       ba04bb24b9575       5 seconds ago        Running             storage-provisioner       2                   788083d028482       storage-provisioner                              kube-system
	fcf6d49dfccc3       20b332c9a70d8       34 seconds ago       Running             kubernetes-dashboard      0                   372ac697592fd       kubernetes-dashboard-8694d4445c-p26th            kubernetes-dashboard
	3bca61fe93233       1611cd07b61d5       48 seconds ago       Running             busybox                   1                   dba3aaf2b4e88       busybox                                          default
	249571e18828c       97e04611ad434       48 seconds ago       Running             coredns                   1                   e8b7259a0ff29       coredns-5dd5756b68-bq24f                         kube-system
	2ffa335da4fa4       c96ee3c174987       49 seconds ago       Running             kindnet-cni               1                   09e7329936a53       kindnet-knhp5                                    kube-system
	03dc4b502fb77       ba04bb24b9575       49 seconds ago       Exited              storage-provisioner       1                   788083d028482       storage-provisioner                              kube-system
	24c5d7076c690       940f54a5bcae9       49 seconds ago       Running             kube-proxy                1                   5741545cfc234       kube-proxy-jnkj2                                 kube-system
	bae54f9e55d77       762dce4090c5f       55 seconds ago       Running             kube-scheduler            1                   8b6a869415018       kube-scheduler-old-k8s-version-251758            kube-system
	7883e471477c5       00543d2fe5d71       55 seconds ago       Running             kube-apiserver            1                   87303144defb7       kube-apiserver-old-k8s-version-251758            kube-system
	436face63ec1e       9cdd6470f48c8       55 seconds ago       Running             etcd                      1                   7c011e9cf9961       etcd-old-k8s-version-251758                      kube-system
	4e9273c710366       46cc66ccc7c19       55 seconds ago       Running             kube-controller-manager   1                   a923b1c11ea23       kube-controller-manager-old-k8s-version-251758   kube-system
	35509d51c1377       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   72b4b575f8389       busybox                                          default
	9699fef3994e2       97e04611ad434       About a minute ago   Exited              coredns                   0                   bdc3799df2399       coredns-5dd5756b68-bq24f                         kube-system
	434574137bc77       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   e3fdf61e26eaa       kindnet-knhp5                                    kube-system
	f81a4cf68093a       940f54a5bcae9       About a minute ago   Exited              kube-proxy                0                   b4d3e517cba7d       kube-proxy-jnkj2                                 kube-system
	0eef9c3bcfb97       46cc66ccc7c19       2 minutes ago        Exited              kube-controller-manager   0                   21326c15976df       kube-controller-manager-old-k8s-version-251758   kube-system
	a5ecfaf90d03b       762dce4090c5f       2 minutes ago        Exited              kube-scheduler            0                   6efc2cf694724       kube-scheduler-old-k8s-version-251758            kube-system
	6a28c04f07b81       00543d2fe5d71       2 minutes ago        Exited              kube-apiserver            0                   f847647f1be1c       kube-apiserver-old-k8s-version-251758            kube-system
	d90bfeb57827c       9cdd6470f48c8       2 minutes ago        Exited              etcd                      0                   62d9b32efb3e3       etcd-old-k8s-version-251758                      kube-system
	
	
	==> containerd <==
	Dec 28 07:16:40 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:40.227266946Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.920938488Z" level=info msg="StopPodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.921498633Z" level=info msg="TearDown network for sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.921556496Z" level=info msg="StopPodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.928593113Z" level=info msg="RemovePodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.928638397Z" level=info msg="Forcibly stopping sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.929088281Z" level=info msg="TearDown network for sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.932659020Z" level=info msg="Ensure that sandbox c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5 in task-service has been cleanup successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.942214200Z" level=info msg="RemovePodSandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.943787728Z" level=info msg="StopPodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.968756814Z" level=info msg="TearDown network for sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.968822111Z" level=info msg="StopPodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.969626755Z" level=info msg="RemovePodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.969761771Z" level=info msg="Forcibly stopping sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.014644629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.079744670Z" level=info msg="TearDown network for sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.084113940Z" level=info msg="Ensure that sandbox cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09 in task-service has been cleanup successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.101489637Z" level=info msg="RemovePodSandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" returns successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.907448351Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.912733281Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.915856001Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.915977060Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.920755712Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:16:44 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:44.087847590Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:16:44 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:44.087878605Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-251758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-251758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-251758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_14_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:14:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-251758
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:16:43 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-251758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                81009bb2-5b7c-4d13-9b49-9887e473afcc
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-bq24f                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     103s
	  kube-system                 etcd-old-k8s-version-251758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-knhp5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-251758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-251758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-jnkj2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-251758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 metrics-server-57f55c9bc5-rszdx                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         75s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-8j26z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p26th             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-251758 event: Registered Node old-k8s-version-251758 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-251758 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-251758 event: Registered Node old-k8s-version-251758 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             1s                 kubelet          Node old-k8s-version-251758 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  1s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:16:44 up 59 min,  0 user,  load average: 1.64, 1.56, 1.73
	Linux old-k8s-version-251758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630330    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/97f2cb7e-2299-417a-9a40-620c0419ebba-cni-cfg\") pod \"kindnet-knhp5\" (UID: \"97f2cb7e-2299-417a-9a40-620c0419ebba\") " pod="kube-system/kindnet-knhp5"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630384    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-kubeconfig\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630411    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-etc-ca-certificates\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630502    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/84191f4f25506808afc74712c42b7b22-etcd-certs\") pod \"etcd-old-k8s-version-251758\" (UID: \"84191f4f25506808afc74712c42b7b22\") " pod="kube-system/etcd-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630527    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-flexvolume-dir\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630614    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-usr-local-share-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630643    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28083a3b-a189-4b49-b091-6b08fbe9526e-xtables-lock\") pod \"kube-proxy-jnkj2\" (UID: \"28083a3b-a189-4b49-b091-6b08fbe9526e\") " pod="kube-system/kube-proxy-jnkj2"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630700    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aebf01ec7e15793f283ae1566d6edf2-kubeconfig\") pod \"kube-scheduler-old-k8s-version-251758\" (UID: \"5aebf01ec7e15793f283ae1566d6edf2\") " pod="kube-system/kube-scheduler-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630723    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28083a3b-a189-4b49-b091-6b08fbe9526e-lib-modules\") pod \"kube-proxy-jnkj2\" (UID: \"28083a3b-a189-4b49-b091-6b08fbe9526e\") " pod="kube-system/kube-proxy-jnkj2"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630747    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-etc-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630783    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/84191f4f25506808afc74712c42b7b22-etcd-data\") pod \"etcd-old-k8s-version-251758\" (UID: \"84191f4f25506808afc74712c42b7b22\") " pod="kube-system/etcd-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630805    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-k8s-certs\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630828    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-usr-share-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630851    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-ca-certs\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630874    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-k8s-certs\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630905    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-usr-share-ca-certificates\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630932    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97f2cb7e-2299-417a-9a40-620c0419ebba-lib-modules\") pod \"kindnet-knhp5\" (UID: \"97f2cb7e-2299-417a-9a40-620c0419ebba\") " pod="kube-system/kindnet-knhp5"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916232    2399 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916279    2399 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916578    2399 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9l9sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rszdx_kube-system(bbdba1a8-0217-4b61-8f2d-b99adc87f35b): ErrImagePull: failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916627    2399 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rszdx" podUID="bbdba1a8-0217-4b61-8f2d-b99adc87f35b"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088231    2399 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088280    2399 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088382    2399 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dzwtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-8j26z_kubernetes-dashboard(e5368324-cefa-4185-b286-ce55e50b4945): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image "registry.k8s.io/echoserver:1.4": not implemented: media type "application/vnd.docker.distribution.manifest.v1+prettyjws" is no longer supported since containerd v2.1, please rebuild the image as "application/vnd.docker.distribution.manifest.v2+json" or "application/vnd.oci.image.manifest.v1+json"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088427    2399 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8j26z" podUID="e5368324-cefa-4185-b286-ce55e50b4945"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-251758 -n old-k8s-version-251758
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-251758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z: exit status 1 (115.29196ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rszdx" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-8j26z" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-251758
helpers_test.go:244: (dbg) docker inspect old-k8s-version-251758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819",
	        "Created": "2025-12-28T07:14:24.025861118Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:15:41.80609911Z",
	            "FinishedAt": "2025-12-28T07:15:41.02748911Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/hosts",
	        "LogPath": "/var/lib/docker/containers/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819/c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819-json.log",
	        "Name": "/old-k8s-version-251758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-251758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-251758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c7dc35270decb04c1b16a8507b01539f68146c2cf02f1edae06ab2da08485819",
	                "LowerDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d7dc2ab15c7a91a057890bd9e6b9b83b387d6e958f45ebb31744f96d62d3db8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-251758",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-251758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-251758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-251758",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-251758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3458d06b49877416429497880c1d8acd04dc40fe12388c98a5b392593f2ce7",
	            "SandboxKey": "/var/run/docker/netns/cb3458d06b49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-251758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:91:3b:32:f8:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1009ce3f5c8920d7e9c1b4d43959d604fdb2df80f859764bcfa6d7e7d0de0f2e",
	                    "EndpointID": "5da5834e51a589b73d437bee3fd8683a01dca291d0f6c42ec991bd8e36110f79",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-251758",
	                        "c7dc35270dec"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758 -n old-k8s-version-251758
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-251758 logs -n 25
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-742569 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo containerd config dump                                                                                                                                                                                                        │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ ssh     │ -p cilium-742569 sudo crio config                                                                                                                                                                                                                   │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │                     │
	│ delete  │ -p cilium-742569                                                                                                                                                                                                                                    │ cilium-742569             │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:08 UTC │ 28 Dec 25 07:08 UTC │
	│ start   │ -p cert-expiration-478620 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ delete  │ -p cert-expiration-478620                                                                                                                                                                                                                           │ cert-expiration-478620    │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │ 28 Dec 25 07:11 UTC │
	│ start   │ -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │                     │
	│ ssh     │ force-systemd-env-782848 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ delete  │ -p force-systemd-env-782848                                                                                                                                                                                                                         │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ start   │ -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p cert-options-913529                                                                                                                                                                                                                              │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ stop    │ -p old-k8s-version-251758 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
	│ image   │ old-k8s-version-251758 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ pause   │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
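The audit table above records every minikube invocation in this run. To reproduce the certificate check the cert-options rows perform, a minimal sketch, assuming the cert-options-913529 profile has not yet been deleted (the table shows it is deleted at 07:14):

	# Print the apiserver certificate and show its Subject Alternative Names;
	# the test asserts the custom --apiserver-ips/--apiserver-names appear here.
	out/minikube-linux-arm64 ssh -p cert-options-913529 -- \
	  "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'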
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:15:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
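For readers unfamiliar with the glog-style prefix described above, here is one line from this log broken down against the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] pattern:

	# I1228 07:15:41.515391  213781 out.go:360] Setting OutFile to fd 1 ...
	#  I               -> severity: Info (W=Warning, E=Error, F=Fatal)
	#  1228            -> mmdd: December 28
	#  07:15:41.515391 -> hh:mm:ss.uuuuuu wall-clock time
	#  213781          -> threadid (here, the minikube process id)
	#  out.go:360      -> source file and line that emitted the message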
	I1228 07:15:41.515391  213781 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:15:41.515581  213781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:15:41.515613  213781 out.go:374] Setting ErrFile to fd 2...
	I1228 07:15:41.515635  213781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:15:41.516021  213781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:15:41.516571  213781 out.go:368] Setting JSON to false
	I1228 07:15:41.517470  213781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3491,"bootTime":1766902650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:15:41.517599  213781 start.go:143] virtualization:  
	I1228 07:15:41.520731  213781 out.go:179] * [old-k8s-version-251758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:15:41.523112  213781 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:15:41.523220  213781 notify.go:221] Checking for updates...
	I1228 07:15:41.529085  213781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:15:41.532102  213781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:41.534967  213781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:15:41.537946  213781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:15:41.540789  213781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:15:41.544241  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:41.547823  213781 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1228 07:15:41.550594  213781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:15:41.580200  213781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:15:41.580330  213781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:15:41.637460  213781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:15:41.627558424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:15:41.637570  213781 docker.go:319] overlay module found
	I1228 07:15:41.640718  213781 out.go:179] * Using the docker driver based on existing profile
	I1228 07:15:41.643586  213781 start.go:309] selected driver: docker
	I1228 07:15:41.643612  213781 start.go:928] validating driver "docker" against &{Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:41.643712  213781 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:15:41.644451  213781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:15:41.710516  213781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:15:41.6962249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:15:41.710971  213781 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:15:41.711042  213781 cni.go:84] Creating CNI manager for ""
	I1228 07:15:41.711160  213781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:15:41.711215  213781 start.go:353] cluster config:
	{Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:41.714349  213781 out.go:179] * Starting "old-k8s-version-251758" primary control-plane node in "old-k8s-version-251758" cluster
	I1228 07:15:41.717242  213781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:15:41.720110  213781 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:15:41.722898  213781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 07:15:41.722941  213781 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:15:41.722951  213781 cache.go:65] Caching tarball of preloaded images
	I1228 07:15:41.723036  213781 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:15:41.723045  213781 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1228 07:15:41.723165  213781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/config.json ...
	I1228 07:15:41.723395  213781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:15:41.749317  213781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:15:41.749335  213781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:15:41.749360  213781 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:15:41.749390  213781 start.go:360] acquireMachinesLock for old-k8s-version-251758: {Name:mk1109054908f5edf3f362974288170bd62da790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:15:41.749446  213781 start.go:364] duration metric: took 39.434µs to acquireMachinesLock for "old-k8s-version-251758"
	I1228 07:15:41.749464  213781 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:15:41.749470  213781 fix.go:54] fixHost starting: 
	I1228 07:15:41.749727  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:41.770078  213781 fix.go:112] recreateIfNeeded on old-k8s-version-251758: state=Stopped err=<nil>
	W1228 07:15:41.770114  213781 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:15:41.773658  213781 out.go:252] * Restarting existing docker container for "old-k8s-version-251758" ...
	I1228 07:15:41.773746  213781 cli_runner.go:164] Run: docker start old-k8s-version-251758
	I1228 07:15:42.040760  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:42.074930  213781 kic.go:430] container "old-k8s-version-251758" state is running.
	I1228 07:15:42.075353  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:42.102552  213781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/config.json ...
	I1228 07:15:42.102839  213781 machine.go:94] provisionDockerMachine start ...
	I1228 07:15:42.102904  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:42.128773  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:42.129129  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:42.129149  213781 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:15:42.129834  213781 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:15:45.284637  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-251758
	
	I1228 07:15:45.284660  213781 ubuntu.go:182] provisioning hostname "old-k8s-version-251758"
	I1228 07:15:45.284753  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.306385  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:45.306763  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:45.306777  213781 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-251758 && echo "old-k8s-version-251758" | sudo tee /etc/hostname
	I1228 07:15:45.456427  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-251758
	
	I1228 07:15:45.456546  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.486572  213781 main.go:144] libmachine: Using SSH client type: native
	I1228 07:15:45.486922  213781 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1228 07:15:45.486947  213781 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-251758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-251758/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-251758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:15:45.632823  213781 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:15:45.632853  213781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:15:45.632877  213781 ubuntu.go:190] setting up certificates
	I1228 07:15:45.632886  213781 provision.go:84] configureAuth start
	I1228 07:15:45.632952  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:45.649262  213781 provision.go:143] copyHostCerts
	I1228 07:15:45.649341  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:15:45.649355  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:15:45.649434  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:15:45.649543  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:15:45.649555  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:15:45.649583  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:15:45.649652  213781 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:15:45.649660  213781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:15:45.649685  213781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:15:45.649745  213781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-251758 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-251758]
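The SANs listed in the san=[...] field above should appear verbatim in the generated server certificate. A quick way to confirm, as a sketch run on the Jenkins host:

	# Inspect the machine server cert minikube just generated; expect the SANs
	# 127.0.0.1, 192.168.76.2, localhost, minikube, old-k8s-version-251758.
	openssl x509 -text -noout \
	  -in /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'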
	I1228 07:15:45.980477  213781 provision.go:177] copyRemoteCerts
	I1228 07:15:45.980538  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:15:45.980588  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:45.998381  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.100464  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:15:46.117980  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1228 07:15:46.135406  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 07:15:46.153229  213781 provision.go:87] duration metric: took 520.329832ms to configureAuth
	I1228 07:15:46.153257  213781 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:15:46.153450  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:46.153466  213781 machine.go:97] duration metric: took 4.050617401s to provisionDockerMachine
	I1228 07:15:46.153475  213781 start.go:293] postStartSetup for "old-k8s-version-251758" (driver="docker")
	I1228 07:15:46.153485  213781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:15:46.153536  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:15:46.153580  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.171103  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.268799  213781 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:15:46.272143  213781 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:15:46.272169  213781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:15:46.272181  213781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:15:46.272238  213781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:15:46.272315  213781 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:15:46.272415  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:15:46.280192  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:15:46.297442  213781 start.go:296] duration metric: took 143.951737ms for postStartSetup
	I1228 07:15:46.297521  213781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:15:46.297562  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.314194  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.409685  213781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:15:46.414371  213781 fix.go:56] duration metric: took 4.66489563s for fixHost
	I1228 07:15:46.414400  213781 start.go:83] releasing machines lock for "old-k8s-version-251758", held for 4.664945107s
	I1228 07:15:46.414478  213781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-251758
	I1228 07:15:46.431431  213781 ssh_runner.go:195] Run: cat /version.json
	I1228 07:15:46.431490  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.431776  213781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:15:46.431832  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:46.449645  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.450967  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:46.544154  213781 ssh_runner.go:195] Run: systemctl --version
	I1228 07:15:46.551127  213781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:15:46.632770  213781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:15:46.632846  213781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:15:46.640588  213781 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
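The find invocation above disables competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A cleaned-up equivalent of that one-liner, for reference:

	# Rename any bridge/podman CNI config that is not already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	# Restoring is the reverse rename (sketch):
	# for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done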
	I1228 07:15:46.640618  213781 start.go:496] detecting cgroup driver to use...
	I1228 07:15:46.640651  213781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:15:46.640697  213781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:15:46.658595  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:15:46.672342  213781 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:15:46.672406  213781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:15:46.688207  213781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:15:46.701972  213781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:15:46.810413  213781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:15:46.931227  213781 docker.go:234] disabling docker service ...
	I1228 07:15:46.931364  213781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:15:46.946864  213781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:15:46.963198  213781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:15:47.101216  213781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:15:47.216967  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
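Because the container runtime here is containerd, minikube stops, disables, and masks Docker inside the node so the two runtimes cannot race over the CRI socket. The same sequence by hand, as a sketch:

	sudo systemctl stop docker.socket docker.service   # stop both units (the socket can reactivate the service)
	sudo systemctl disable docker.socket               # drop it from the boot sequence
	sudo systemctl mask docker.service                 # link the unit to /dev/null so nothing can start it
	systemctl is-active docker || echo "docker is not running"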
	I1228 07:15:47.229631  213781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:15:47.243527  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1228 07:15:47.252139  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:15:47.260844  213781 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:15:47.260959  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:15:47.269353  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:15:47.278010  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:15:47.286605  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:15:47.295310  213781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:15:47.303417  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:15:47.312688  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:15:47.321372  213781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:15:47.330401  213781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:15:47.337813  213781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:15:47.345357  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:47.461373  213781 ssh_runner.go:195] Run: sudo systemctl restart containerd
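The run of sed edits above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false, matching the "cgroupfs" driver detected on the host earlier) before restarting containerd. A spot check after the restart, as a sketch:

	# Expect "SystemdCgroup = false" under the runc options; a mismatch with
	# the kubelet's cgroupDriver would break pod startup.
	sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml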
	I1228 07:15:47.604741  213781 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:15:47.604868  213781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:15:47.608888  213781 start.go:574] Will wait 60s for crictl version
	I1228 07:15:47.608957  213781 ssh_runner.go:195] Run: which crictl
	I1228 07:15:47.612395  213781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:15:47.639107  213781 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
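crictl needs no --runtime-endpoint flag here because the /etc/crictl.yaml written a few lines up pins it to the containerd socket. Verifying the wiring by hand (sketch):

	cat /etc/crictl.yaml   # runtime-endpoint: unix:///run/containerd/containerd.sock
	sudo crictl info >/dev/null && echo "crictl can reach containerd"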
	I1228 07:15:47.639186  213781 ssh_runner.go:195] Run: containerd --version
	I1228 07:15:47.658295  213781 ssh_runner.go:195] Run: containerd --version
	I1228 07:15:47.682052  213781 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.2.1 ...
	I1228 07:15:47.684965  213781 cli_runner.go:164] Run: docker network inspect old-k8s-version-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:15:47.700740  213781 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:15:47.710622  213781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
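The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current mapping, and copy the result back. Generalized as a sketch, with NAME and IP taken from this run:

	NAME=host.minikube.internal IP=192.168.76.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$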
	I1228 07:15:47.721753  213781 kubeadm.go:884] updating cluster {Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:15:47.721881  213781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 07:15:47.721942  213781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:15:47.755054  213781 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:15:47.755078  213781 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:15:47.755137  213781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:15:47.785448  213781 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:15:47.785473  213781 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:15:47.785482  213781 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1228 07:15:47.785587  213781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-251758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
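The [Service]/ExecStart fragment above is rendered into a systemd drop-in and installed a few lines below as 10-kubeadm.conf. To see the unit as systemd actually resolves it inside the node, a sketch using the same commands the audit table records for other profiles:

	minikube ssh -p old-k8s-version-251758 -- sudo systemctl cat kubelet --no-pager
	# or just the drop-in minikube wrote:
	minikube ssh -p old-k8s-version-251758 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf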
	I1228 07:15:47.785653  213781 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:15:47.819597  213781 cni.go:84] Creating CNI manager for ""
	I1228 07:15:47.819622  213781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:15:47.819649  213781 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:15:47.819676  213781 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-251758 NodeName:old-k8s-version-251758 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:15:47.819831  213781 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-251758"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:15:47.819899  213781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1228 07:15:47.827833  213781 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:15:47.827916  213781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:15:47.835561  213781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1228 07:15:47.848428  213781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:15:47.861106  213781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
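The kubeadm config shown earlier lands here as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the previous copy. On kubeadm releases that ship the validate subcommand, it can be checked in place (sketch):

	# Uses the kubeadm binary minikube cached for this cluster's version.
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new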
	I1228 07:15:47.874384  213781 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:15:47.878131  213781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:15:47.888036  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:48.006715  213781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:15:48.024824  213781 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758 for IP: 192.168.76.2
	I1228 07:15:48.024849  213781 certs.go:195] generating shared ca certs ...
	I1228 07:15:48.024864  213781 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.025007  213781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:15:48.025071  213781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:15:48.025082  213781 certs.go:257] generating profile certs ...
	I1228 07:15:48.025180  213781 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.key
	I1228 07:15:48.025263  213781 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.key.4f865eb4
	I1228 07:15:48.025316  213781 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.key
	I1228 07:15:48.025443  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:15:48.025485  213781 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:15:48.025502  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:15:48.025539  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:15:48.025568  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:15:48.025601  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:15:48.025657  213781 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:15:48.026254  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:15:48.048586  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:15:48.068887  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:15:48.088057  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:15:48.107371  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1228 07:15:48.125942  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 07:15:48.147395  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:15:48.166107  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:15:48.184148  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:15:48.204360  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:15:48.225590  213781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:15:48.246352  213781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:15:48.260480  213781 ssh_runner.go:195] Run: openssl version
	I1228 07:15:48.270738  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.278700  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:15:48.286403  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.291927  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.291988  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:15:48.338602  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:15:48.346162  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.353600  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:15:48.361221  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.365100  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.365163  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:15:48.406209  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:15:48.413834  213781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.421361  213781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:15:48.430877  213781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.436178  213781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.436258  213781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:15:48.478496  213781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
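The test -L probes above explain the odd filenames: OpenSSL locates a CA in /etc/ssl/certs via a symlink named after the certificate's subject hash plus a .0 suffix, which is exactly what the preceding openssl x509 -hash calls computed (b5213941, 51391683, 3ec20f2e). Recreating one such link by hand (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # -> b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"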
	I1228 07:15:48.486765  213781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:15:48.490996  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:15:48.535220  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:15:48.577930  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:15:48.620283  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:15:48.662562  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:15:48.708725  213781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
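Each -checkend 86400 call above exits zero only if the certificate remains valid for at least the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regeneration. The idiom in isolation, as a sketch with one cert path from this run:

	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	if openssl x509 -noout -in "$CERT" -checkend 86400; then
	  echo "valid for at least 24h"
	else
	  echo "expires within 24h (or already expired)"
	fi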
	I1228 07:15:48.761282  213781 kubeadm.go:401] StartCluster: {Name:old-k8s-version-251758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:15:48.761475  213781 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:15:48.773667  213781 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:15:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
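	Note: this warning is expected on a freshly restarted node. minikube asks runc for any paused containers before attempting an unpause, and /run/containerd/runc/k8s.io only exists once containerd has created at least one container in its k8s.io namespace. The probe it runs, verbatim from the line above:
	
	    # Lists runc-managed containers in containerd's k8s.io namespace as JSON;
	    # fails with "no such file or directory" until the first container starts.
	    sudo runc --root /run/containerd/runc/k8s.io list -f json
	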
	I1228 07:15:48.773760  213781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:15:48.785418  213781 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:15:48.785441  213781 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:15:48.785535  213781 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:15:48.795145  213781 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:15:48.795563  213781 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-251758" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:48.795712  213781 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-251758" cluster setting kubeconfig missing "old-k8s-version-251758" context setting]
	I1228 07:15:48.796019  213781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
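	Note: the shared kubeconfig was missing both the cluster and the context entry for this profile, so minikube rewrites it under a file lock. A quick manual verification of the repair (kubeconfig path taken from this log):
	
	    # Confirm the repaired kubeconfig now carries the profile's context.
	    kubectl config get-contexts \
	        --kubeconfig /home/jenkins/minikube-integration/22352-2380/kubeconfig old-k8s-version-251758
	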
	I1228 07:15:48.797308  213781 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:15:48.811335  213781 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 07:15:48.811372  213781 kubeadm.go:602] duration metric: took 25.924616ms to restartPrimaryControlPlane
	I1228 07:15:48.811417  213781 kubeadm.go:403] duration metric: took 50.137365ms to StartCluster
	I1228 07:15:48.811438  213781 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.811515  213781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:15:48.812141  213781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:15:48.812393  213781 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:15:48.812823  213781 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:15:48.812897  213781 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-251758"
	I1228 07:15:48.812910  213781 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-251758"
	W1228 07:15:48.812916  213781 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:15:48.812934  213781 config.go:182] Loaded profile config "old-k8s-version-251758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1228 07:15:48.812951  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.812975  213781 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-251758"
	I1228 07:15:48.812993  213781 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-251758"
	I1228 07:15:48.813270  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.813406  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.813923  213781 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-251758"
	I1228 07:15:48.813941  213781 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-251758"
	W1228 07:15:48.813948  213781 addons.go:248] addon metrics-server should already be in state true
	I1228 07:15:48.813973  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.814398  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.820528  213781 addons.go:70] Setting dashboard=true in profile "old-k8s-version-251758"
	I1228 07:15:48.820559  213781 addons.go:239] Setting addon dashboard=true in "old-k8s-version-251758"
	W1228 07:15:48.820568  213781 addons.go:248] addon dashboard should already be in state true
	I1228 07:15:48.820620  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.821134  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.824034  213781 out.go:179] * Verifying Kubernetes components...
	I1228 07:15:48.827264  213781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:15:48.893789  213781 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:15:48.897718  213781 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:15:48.908134  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:15:48.908169  213781 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:15:48.908236  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.914857  213781 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:15:48.917963  213781 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:48.917987  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:15:48.918049  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.919896  213781 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:15:48.923678  213781 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-251758"
	W1228 07:15:48.923700  213781 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:15:48.923724  213781 host.go:66] Checking if "old-k8s-version-251758" exists ...
	I1228 07:15:48.924138  213781 cli_runner.go:164] Run: docker container inspect old-k8s-version-251758 --format={{.State.Status}}
	I1228 07:15:48.926314  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:15:48.926337  213781 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:15:48.926417  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:48.980575  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:48.992715  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:49.008047  213781 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:15:49.008071  213781 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:15:49.008139  213781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-251758
	I1228 07:15:49.008405  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:49.038228  213781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/old-k8s-version-251758/id_rsa Username:docker}
	I1228 07:15:49.217505  213781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:15:49.281979  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:15:49.282049  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:15:49.312160  213781 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-251758" to be "Ready" ...
	I1228 07:15:49.346171  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:49.347310  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:15:49.347331  213781 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:15:49.390664  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:15:49.390691  213781 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:15:49.395543  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:15:49.395568  213781 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:15:49.477944  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:15:49.479893  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:15:49.479915  213781 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:15:49.487819  213781 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:15:49.487845  213781 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:15:49.604087  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:15:49.604106  213781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:15:49.642211  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:15:49.789172  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:15:49.789198  213781 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1228 07:15:49.817818  213781 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1228 07:15:49.817872  213781 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
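	Note: "connection refused" here just means the apiserver on localhost:8443 is not up yet; minikube retries the apply after a short backoff (200ms above, then with --force on the next Run line). A hand-rolled equivalent of that retry, assuming the same paths as in the log:
	
	    # Keep retrying the apply until the apiserver starts answering.
	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	        sleep 1    # the log shows a 200ms backoff; 1s is plenty for a manual check
	    done
	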
	I1228 07:15:49.921188  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:15:49.921216  213781 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:15:50.005537  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:15:50.112958  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:15:50.112986  213781 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:15:50.172789  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:15:50.172813  213781 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:15:50.217608  213781 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:15:50.217633  213781 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:15:50.292347  213781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:15:53.761289  213781 node_ready.go:49] node "old-k8s-version-251758" is "Ready"
	I1228 07:15:53.761330  213781 node_ready.go:38] duration metric: took 4.449086435s for node "old-k8s-version-251758" to be "Ready" ...
	I1228 07:15:53.761346  213781 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:15:53.761423  213781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:15:55.596037  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.118048746s)
	I1228 07:15:56.402936  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.760682048s)
	I1228 07:15:56.402965  213781 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-251758"
	I1228 07:15:56.620050  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.614465994s)
	I1228 07:15:57.160401  213781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.868007839s)
	I1228 07:15:57.160631  213781 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.399189635s)
	I1228 07:15:57.160664  213781 api_server.go:72] duration metric: took 8.348239005s to wait for apiserver process to appear ...
	I1228 07:15:57.160670  213781 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:15:57.160687  213781 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:15:57.163580  213781 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-251758 addons enable metrics-server
	
	I1228 07:15:57.166658  213781 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1228 07:15:56.359053  202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000118942s
	I1228 07:15:56.359085  202182 kubeadm.go:319] 
	I1228 07:15:56.359144  202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:15:56.359183  202182 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:15:56.359292  202182 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:15:56.359301  202182 kubeadm.go:319] 
	I1228 07:15:56.359405  202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:15:56.359441  202182 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:15:56.359476  202182 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:15:56.359484  202182 kubeadm.go:319] 
	I1228 07:15:56.372655  202182 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:15:56.373414  202182 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:15:56.373650  202182 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:15:56.374256  202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1228 07:15:56.374302  202182 kubeadm.go:319] 
	I1228 07:15:56.374426  202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1228 07:15:56.374572  202182 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000118942s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
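	Note: the diagnostics kubeadm suggests have to run inside the node, which under the docker driver means going through the profile's node container. A sketch using the test binary and profile name from this run:
	
	    # Run the suggested kubelet diagnostics inside the node container.
	    out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- sudo systemctl status kubelet
	    out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- sudo journalctl -xeu kubelet
	    # The exact probe kubeadm polls; "connection refused" means the kubelet never bound port 10248.
	    out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- curl -sSL http://127.0.0.1:10248/healthz
	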
	
	I1228 07:15:56.374955  202182 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1228 07:15:56.816009  202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:15:56.830130  202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:15:56.830189  202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:15:56.839676  202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:15:56.839743  202182 kubeadm.go:158] found existing configuration files:
	
	I1228 07:15:56.839818  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:15:56.848800  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:15:56.848913  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:15:56.858141  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:15:56.868016  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:15:56.868125  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:15:56.876557  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.886001  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:15:56.886129  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:15:56.894421  202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:15:56.903733  202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:15:56.903858  202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
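	Note: the four grep/rm pairs above are the stale-config sweep before the retried init: any component kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed (here they simply no longer exist after the reset). The same pattern, compactly:
	
	    # Drop any component kubeconfig that does not point at the expected endpoint.
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/${f}.conf \
	            || sudo rm -f /etc/kubernetes/${f}.conf
	    done
	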
	I1228 07:15:56.912105  202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:15:56.973760  202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:15:56.974624  202182 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:15:57.076378  202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:15:57.076579  202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:15:57.076651  202182 kubeadm.go:319] OS: Linux
	I1228 07:15:57.076720  202182 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:15:57.076805  202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:15:57.076885  202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:15:57.076967  202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:15:57.077050  202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:15:57.077135  202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:15:57.077218  202182 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:15:57.077302  202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:15:57.077386  202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:15:57.173412  202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:15:57.173584  202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:15:57.173716  202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:15:57.193049  202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:15:57.196350  202182 out.go:252]   - Generating certificates and keys ...
	I1228 07:15:57.196587  202182 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:15:57.196675  202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:15:57.196779  202182 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:15:57.197830  202182 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:15:57.198363  202182 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:15:57.198849  202182 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:15:57.199374  202182 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:15:57.199787  202182 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:15:57.200352  202182 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:15:57.200860  202182 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:15:57.201385  202182 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:15:57.201487  202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:15:57.595218  202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:15:57.831579  202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:15:58.069431  202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:15:58.608051  202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:15:58.960100  202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:15:58.960768  202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:15:58.963496  202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:15:57.169567  213781 addons.go:530] duration metric: took 8.356745232s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1228 07:15:57.170643  213781 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:15:57.172042  213781 api_server.go:141] control plane version: v1.28.0
	I1228 07:15:57.172072  213781 api_server.go:131] duration metric: took 11.395353ms to wait for apiserver health ...
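	Note: the healthz wait above can be reproduced by hand; the profile's CA normally lives under MINIKUBE_HOME (path assumed from the environment logged at startup):
	
	    # Probe apiserver health; --cacert verifies the serving cert instead of using -k.
	    curl --cacert /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt \
	        https://192.168.76.2:8443/healthz
	    # Expected body: ok
	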
	I1228 07:15:57.172082  213781 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:15:57.178707  213781 system_pods.go:59] 9 kube-system pods found
	I1228 07:15:57.178750  213781 system_pods.go:61] "coredns-5dd5756b68-bq24f" [d30162e5-4586-47ce-98f4-f59746df82ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:15:57.178760  213781 system_pods.go:61] "etcd-old-k8s-version-251758" [89bf89f6-a96d-47e0-b7ba-99f69754c84c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:15:57.178766  213781 system_pods.go:61] "kindnet-knhp5" [97f2cb7e-2299-417a-9a40-620c0419ebba] Running
	I1228 07:15:57.178774  213781 system_pods.go:61] "kube-apiserver-old-k8s-version-251758" [976ee1b8-eb95-46c4-8cdf-3694d7a984e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:15:57.178782  213781 system_pods.go:61] "kube-controller-manager-old-k8s-version-251758" [5734ad03-d3a1-480b-94a1-8af5cbecbf42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:15:57.178787  213781 system_pods.go:61] "kube-proxy-jnkj2" [28083a3b-a189-4b49-b091-6b08fbe9526e] Running
	I1228 07:15:57.178794  213781 system_pods.go:61] "kube-scheduler-old-k8s-version-251758" [ffe5fb7d-5d3c-4dd9-a4fb-ec4b68f60520] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:15:57.178810  213781 system_pods.go:61] "metrics-server-57f55c9bc5-rszdx" [bbdba1a8-0217-4b61-8f2d-b99adc87f35b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:15:57.178815  213781 system_pods.go:61] "storage-provisioner" [f0b49587-0166-44ac-bb29-8cc45ec5668d] Running
	I1228 07:15:57.178823  213781 system_pods.go:74] duration metric: took 6.735487ms to wait for pod list to return data ...
	I1228 07:15:57.178836  213781 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:15:57.184256  213781 default_sa.go:45] found service account: "default"
	I1228 07:15:57.184285  213781 default_sa.go:55] duration metric: took 5.443249ms for default service account to be created ...
	I1228 07:15:57.184297  213781 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:15:57.191666  213781 system_pods.go:86] 9 kube-system pods found
	I1228 07:15:57.191703  213781 system_pods.go:89] "coredns-5dd5756b68-bq24f" [d30162e5-4586-47ce-98f4-f59746df82ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:15:57.191714  213781 system_pods.go:89] "etcd-old-k8s-version-251758" [89bf89f6-a96d-47e0-b7ba-99f69754c84c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:15:57.191722  213781 system_pods.go:89] "kindnet-knhp5" [97f2cb7e-2299-417a-9a40-620c0419ebba] Running
	I1228 07:15:57.191730  213781 system_pods.go:89] "kube-apiserver-old-k8s-version-251758" [976ee1b8-eb95-46c4-8cdf-3694d7a984e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:15:57.191737  213781 system_pods.go:89] "kube-controller-manager-old-k8s-version-251758" [5734ad03-d3a1-480b-94a1-8af5cbecbf42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:15:57.191749  213781 system_pods.go:89] "kube-proxy-jnkj2" [28083a3b-a189-4b49-b091-6b08fbe9526e] Running
	I1228 07:15:57.191756  213781 system_pods.go:89] "kube-scheduler-old-k8s-version-251758" [ffe5fb7d-5d3c-4dd9-a4fb-ec4b68f60520] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:15:57.191763  213781 system_pods.go:89] "metrics-server-57f55c9bc5-rszdx" [bbdba1a8-0217-4b61-8f2d-b99adc87f35b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:15:57.191774  213781 system_pods.go:89] "storage-provisioner" [f0b49587-0166-44ac-bb29-8cc45ec5668d] Running
	I1228 07:15:57.191782  213781 system_pods.go:126] duration metric: took 7.479772ms to wait for k8s-apps to be running ...
	I1228 07:15:57.191796  213781 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:15:57.191845  213781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:15:57.210025  213781 system_svc.go:56] duration metric: took 18.221047ms WaitForService to wait for kubelet
	I1228 07:15:57.210054  213781 kubeadm.go:587] duration metric: took 8.39762753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:15:57.210074  213781 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:15:57.214361  213781 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:15:57.214397  213781 node_conditions.go:123] node cpu capacity is 2
	I1228 07:15:57.214410  213781 node_conditions.go:105] duration metric: took 4.330353ms to run NodePressure ...
	I1228 07:15:57.214421  213781 start.go:242] waiting for startup goroutines ...
	I1228 07:15:57.214429  213781 start.go:247] waiting for cluster config update ...
	I1228 07:15:57.214440  213781 start.go:256] writing updated cluster config ...
	I1228 07:15:57.214735  213781 ssh_runner.go:195] Run: rm -f paused
	I1228 07:15:57.229539  213781 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:15:57.234754  213781 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bq24f" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:15:59.240902  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	I1228 07:15:58.967038  202182 out.go:252]   - Booting up control plane ...
	I1228 07:15:58.967133  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:15:58.967207  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:15:58.968494  202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:15:58.990175  202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:15:58.990624  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:15:58.998239  202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:15:58.998885  202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:15:58.998948  202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:15:59.134789  202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:15:59.134903  202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1228 07:16:01.740903  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:04.241723  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:06.242872  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:08.243720  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:10.741300  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:13.240428  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:15.740512  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:17.740572  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:19.742210  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:21.742385  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:24.240077  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	W1228 07:16:26.249689  213781 pod_ready.go:104] pod "coredns-5dd5756b68-bq24f" is not "Ready", error: <nil>
	I1228 07:16:27.743605  213781 pod_ready.go:94] pod "coredns-5dd5756b68-bq24f" is "Ready"
	I1228 07:16:27.743639  213781 pod_ready.go:86] duration metric: took 30.508858315s for pod "coredns-5dd5756b68-bq24f" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.747476  213781 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.758234  213781 pod_ready.go:94] pod "etcd-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.758268  213781 pod_ready.go:86] duration metric: took 10.758661ms for pod "etcd-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.763059  213781 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.769249  213781 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.769280  213781 pod_ready.go:86] duration metric: took 6.191178ms for pod "kube-apiserver-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.772232  213781 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:27.941288  213781 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-251758" is "Ready"
	I1228 07:16:27.941367  213781 pod_ready.go:86] duration metric: took 169.098666ms for pod "kube-controller-manager-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.139363  213781 pod_ready.go:83] waiting for pod "kube-proxy-jnkj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.538967  213781 pod_ready.go:94] pod "kube-proxy-jnkj2" is "Ready"
	I1228 07:16:28.538997  213781 pod_ready.go:86] duration metric: took 399.606289ms for pod "kube-proxy-jnkj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:28.739792  213781 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:29.139314  213781 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-251758" is "Ready"
	I1228 07:16:29.139341  213781 pod_ready.go:86] duration metric: took 399.522284ms for pod "kube-scheduler-old-k8s-version-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:16:29.139353  213781 pod_ready.go:40] duration metric: took 31.909778772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:16:29.192451  213781 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1228 07:16:29.195826  213781 out.go:203] 
	W1228 07:16:29.198720  213781 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 07:16:29.203006  213781 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:16:29.206228  213781 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-251758" cluster and "default" namespace by default
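	Note: client v1.33.2 against cluster v1.28.0 is a five-minor-version skew, well outside kubectl's supported one-minor window, hence the warning. The hint above resolves it with a version-matched kubectl that minikube downloads on demand:
	
	    # Use a kubectl matching the cluster version instead of /usr/local/bin/kubectl.
	    out/minikube-linux-arm64 -p old-k8s-version-251758 kubectl -- get pods -A
	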
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b37d57a2c64f3       ba04bb24b9575       7 seconds ago        Running             storage-provisioner       2                   788083d028482       storage-provisioner                              kube-system
	fcf6d49dfccc3       20b332c9a70d8       36 seconds ago       Running             kubernetes-dashboard      0                   372ac697592fd       kubernetes-dashboard-8694d4445c-p26th            kubernetes-dashboard
	3bca61fe93233       1611cd07b61d5       50 seconds ago       Running             busybox                   1                   dba3aaf2b4e88       busybox                                          default
	249571e18828c       97e04611ad434       50 seconds ago       Running             coredns                   1                   e8b7259a0ff29       coredns-5dd5756b68-bq24f                         kube-system
	2ffa335da4fa4       c96ee3c174987       51 seconds ago       Running             kindnet-cni               1                   09e7329936a53       kindnet-knhp5                                    kube-system
	03dc4b502fb77       ba04bb24b9575       51 seconds ago       Exited              storage-provisioner       1                   788083d028482       storage-provisioner                              kube-system
	24c5d7076c690       940f54a5bcae9       51 seconds ago       Running             kube-proxy                1                   5741545cfc234       kube-proxy-jnkj2                                 kube-system
	bae54f9e55d77       762dce4090c5f       57 seconds ago       Running             kube-scheduler            1                   8b6a869415018       kube-scheduler-old-k8s-version-251758            kube-system
	7883e471477c5       00543d2fe5d71       57 seconds ago       Running             kube-apiserver            1                   87303144defb7       kube-apiserver-old-k8s-version-251758            kube-system
	436face63ec1e       9cdd6470f48c8       57 seconds ago       Running             etcd                      1                   7c011e9cf9961       etcd-old-k8s-version-251758                      kube-system
	4e9273c710366       46cc66ccc7c19       57 seconds ago       Running             kube-controller-manager   1                   a923b1c11ea23       kube-controller-manager-old-k8s-version-251758   kube-system
	35509d51c1377       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   72b4b575f8389       busybox                                          default
	9699fef3994e2       97e04611ad434       About a minute ago   Exited              coredns                   0                   bdc3799df2399       coredns-5dd5756b68-bq24f                         kube-system
	434574137bc77       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   e3fdf61e26eaa       kindnet-knhp5                                    kube-system
	f81a4cf68093a       940f54a5bcae9       About a minute ago   Exited              kube-proxy                0                   b4d3e517cba7d       kube-proxy-jnkj2                                 kube-system
	0eef9c3bcfb97       46cc66ccc7c19       2 minutes ago        Exited              kube-controller-manager   0                   21326c15976df       kube-controller-manager-old-k8s-version-251758   kube-system
	a5ecfaf90d03b       762dce4090c5f       2 minutes ago        Exited              kube-scheduler            0                   6efc2cf694724       kube-scheduler-old-k8s-version-251758            kube-system
	6a28c04f07b81       00543d2fe5d71       2 minutes ago        Exited              kube-apiserver            0                   f847647f1be1c       kube-apiserver-old-k8s-version-251758            kube-system
	d90bfeb57827c       9cdd6470f48c8       2 minutes ago        Exited              etcd                      0                   62d9b32efb3e3       etcd-old-k8s-version-251758                      kube-system
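	Note: the table above is CRI-level container state collected from the node. The same view can be pulled directly from containerd's CRI endpoint on the node (e.g. via minikube ssh):
	
	    # List all CRI containers, including exited ones, as in the table above.
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	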
	
	
	==> containerd <==
	Dec 28 07:16:40 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:40.227266946Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.920938488Z" level=info msg="StopPodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.921498633Z" level=info msg="TearDown network for sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.921556496Z" level=info msg="StopPodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.928593113Z" level=info msg="RemovePodSandbox for \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.928638397Z" level=info msg="Forcibly stopping sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.929088281Z" level=info msg="TearDown network for sandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.932659020Z" level=info msg="Ensure that sandbox c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5 in task-service has been cleanup successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.942214200Z" level=info msg="RemovePodSandbox \"c60435de65e58e7d09d4f84e9f14efffc4734511c6863300dec0d7576ba32ca5\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.943787728Z" level=info msg="StopPodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.968756814Z" level=info msg="TearDown network for sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.968822111Z" level=info msg="StopPodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" returns successfully"
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.969626755Z" level=info msg="RemovePodSandbox for \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:42 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:42.969761771Z" level=info msg="Forcibly stopping sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\""
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.014644629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.079744670Z" level=info msg="TearDown network for sandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.084113940Z" level=info msg="Ensure that sandbox cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09 in task-service has been cleanup successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.101489637Z" level=info msg="RemovePodSandbox \"cd14f93d70e064fc1a1c11c46d332f8bba3dab79025c04fc54f883b9fb197f09\" returns successfully"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.907448351Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.912733281Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.915856001Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.915977060Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:16:43 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:43.920755712Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:16:44 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:44.087847590Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:16:44 old-k8s-version-251758 containerd[556]: time="2025-12-28T07:16:44.087878605Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-251758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-251758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-251758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_14_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:14:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-251758
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:14:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:16:43 +0000   Sun, 28 Dec 2025 07:16:43 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-251758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                81009bb2-5b7c-4d13-9b49-9887e473afcc
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-bq24f                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-251758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-knhp5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-251758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-251758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-jnkj2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-251758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 metrics-server-57f55c9bc5-rszdx                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         77s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-8j26z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p26th             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-251758 event: Registered Node old-k8s-version-251758 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-251758 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)  kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-251758 event: Registered Node old-k8s-version-251758 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet          Node old-k8s-version-251758 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node old-k8s-version-251758 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:16:47 up 59 min,  0 user,  load average: 1.64, 1.56, 1.73
	Linux old-k8s-version-251758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630330    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/97f2cb7e-2299-417a-9a40-620c0419ebba-cni-cfg\") pod \"kindnet-knhp5\" (UID: \"97f2cb7e-2299-417a-9a40-620c0419ebba\") " pod="kube-system/kindnet-knhp5"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630384    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-kubeconfig\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630411    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-etc-ca-certificates\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630502    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/84191f4f25506808afc74712c42b7b22-etcd-certs\") pod \"etcd-old-k8s-version-251758\" (UID: \"84191f4f25506808afc74712c42b7b22\") " pod="kube-system/etcd-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630527    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-flexvolume-dir\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630614    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-usr-local-share-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630643    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28083a3b-a189-4b49-b091-6b08fbe9526e-xtables-lock\") pod \"kube-proxy-jnkj2\" (UID: \"28083a3b-a189-4b49-b091-6b08fbe9526e\") " pod="kube-system/kube-proxy-jnkj2"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630700    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aebf01ec7e15793f283ae1566d6edf2-kubeconfig\") pod \"kube-scheduler-old-k8s-version-251758\" (UID: \"5aebf01ec7e15793f283ae1566d6edf2\") " pod="kube-system/kube-scheduler-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630723    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28083a3b-a189-4b49-b091-6b08fbe9526e-lib-modules\") pod \"kube-proxy-jnkj2\" (UID: \"28083a3b-a189-4b49-b091-6b08fbe9526e\") " pod="kube-system/kube-proxy-jnkj2"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630747    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-etc-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630783    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/84191f4f25506808afc74712c42b7b22-etcd-data\") pod \"etcd-old-k8s-version-251758\" (UID: \"84191f4f25506808afc74712c42b7b22\") " pod="kube-system/etcd-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630805    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-k8s-certs\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630828    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb99d336f5dd6478ae40b7c20acf70a0-usr-share-ca-certificates\") pod \"kube-apiserver-old-k8s-version-251758\" (UID: \"fb99d336f5dd6478ae40b7c20acf70a0\") " pod="kube-system/kube-apiserver-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630851    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-ca-certs\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630874    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-k8s-certs\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630905    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25804a855d9cc873ef6b97be8620af76-usr-share-ca-certificates\") pod \"kube-controller-manager-old-k8s-version-251758\" (UID: \"25804a855d9cc873ef6b97be8620af76\") " pod="kube-system/kube-controller-manager-old-k8s-version-251758"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: I1228 07:16:43.630932    2399 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97f2cb7e-2299-417a-9a40-620c0419ebba-lib-modules\") pod \"kindnet-knhp5\" (UID: \"97f2cb7e-2299-417a-9a40-620c0419ebba\") " pod="kube-system/kindnet-knhp5"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916232    2399 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916279    2399 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916578    2399 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9l9sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rszdx_kube-system(bbdba1a8-0217-4b61-8f2d-b99adc87f35b): ErrImagePull: failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 28 07:16:43 old-k8s-version-251758 kubelet[2399]: E1228 07:16:43.916627    2399 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rszdx" podUID="bbdba1a8-0217-4b61-8f2d-b99adc87f35b"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088231    2399 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088280    2399 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088382    2399 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dzwtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-8j26z_kubernetes-dashboard(e5368324-cefa-4185-b286-ce55e50b4945): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image "registry.k8s.io/echoserver:1.4": not implemented: media type "application/vnd.docker.distribution.manifest.v1+prettyjws" is no longer supported since containerd v2.1, please rebuild the image as "application/vnd.docker.distribution.manifest.v2+json" or "application/vnd.oci.image.manifest.v1+json"
	Dec 28 07:16:44 old-k8s-version-251758 kubelet[2399]: E1228 07:16:44.088427    2399 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8j26z" podUID="e5368324-cefa-4185-b286-ce55e50b4945"
	

                                                
                                                
-- /stdout --
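
The two image-pull failures in the logs above have different causes. The metrics-server pull fails on DNS because the test deliberately points the addon at the non-resolvable registry fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entry in the Audit table further down), while the dashboard-metrics-scraper pull of registry.k8s.io/echoserver:1.4 is rejected because the registry serves that tag as a Docker schema 1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd removed in v2.1. A quick way to see which media type a registry stores for a tag is a HEAD request against the v2 manifest endpoint; the sketch below is a hypothetical stand-alone checker (not part of the test suite) and assumes registry.k8s.io permits anonymous manifest requests:

	// Hypothetical one-off checker (not from helpers_test.go): ask the
	// registry which manifest media type it stores for a tag. containerd
	// >= 2.1 rejects application/vnd.docker.distribution.manifest.v1+prettyjws.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodHead,
			"https://registry.k8s.io/v2/echoserver/manifests/1.4", nil)
		if err != nil {
			panic(err)
		}
		// Advertise the modern types first; the registry answers with the
		// media type it actually holds for this tag.
		req.Header.Set("Accept",
			"application/vnd.oci.image.manifest.v1+json, "+
				"application/vnd.docker.distribution.manifest.v2+json, "+
				"application/vnd.docker.distribution.manifest.v1+prettyjws")
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status, resp.Header.Get("Content-Type"))
	}

If the registry still serves the schema 1 manifest, the printed Content-Type would be expected to match the media type named in the containerd error above.
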
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-251758 -n old-k8s-version-251758
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-251758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z: exit status 1 (81.699879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rszdx" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-8j26z" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-251758 describe pod metrics-server-57f55c9bc5-rszdx dashboard-metrics-scraper-5f989dc9cf-8j26z: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.78s)
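
The NotFound errors in the describe step above are most likely a namespace mismatch in the post-mortem helper rather than the pods vanishing: the pod list at helpers_test.go:270 uses -A, but the follow-up `kubectl describe pod` at helpers_test.go:286 passes no namespace, so it looks in default while the pods live in kube-system and kubernetes-dashboard. A namespace-aware variant might look like the following sketch (hypothetical, not the current helper):

	// Hypothetical fix sketch: the list used -A, but describe defaults to
	// the "default" namespace, hence "NotFound". Look each pod up across
	// namespaces first, then describe it where it actually lives.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, pod := range []string{
			"metrics-server-57f55c9bc5-rszdx",
			"dashboard-metrics-scraper-5f989dc9cf-8j26z",
		} {
			// Resolve the namespace with a field selector on metadata.name.
			ns, err := exec.Command("kubectl", "--context", "old-k8s-version-251758",
				"get", "po", "-A", "--field-selector=metadata.name="+pod,
				"-o=jsonpath={.items[0].metadata.namespace}").Output()
			if err != nil {
				fmt.Printf("%s: not found in any namespace\n", pod)
				continue
			}
			out, _ := exec.Command("kubectl", "--context", "old-k8s-version-251758",
				"describe", "pod", pod, "-n", strings.TrimSpace(string(ns))).CombinedOutput()
			fmt.Print(string(out))
		}
	}
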

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-863373 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863373 -n no-preload-863373
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863373 -n no-preload-863373: exit status 2 (344.110533ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863373 -n no-preload-863373
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863373 -n no-preload-863373: exit status 2 (354.457497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-863373 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863373 -n no-preload-863373
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863373 -n no-preload-863373
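
What fails in this test is the first status probe after `minikube pause`: start_stop_delete_test.go:309 expects `status --format={{.APIServer}}` to print "Paused", but it printed "Running", even though the kubelet check right after reported "Stopped". The exit status 2 by itself is tolerated ("may be ok" above), because `minikube status` exits non-zero whenever a component is not Running. A minimal stand-alone version of the same check (hypothetical, reusing the binary path and profile name from this run) would be:

	// Minimal stand-alone version of the post-pause check (hypothetical,
	// not the test's helper). `minikube status` exits non-zero when a
	// component is paused or stopped, so the exec error is ignored on purpose.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "no-preload-863373").CombinedOutput()
		got := strings.TrimSpace(string(out))
		if got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want %q\n", got, "Paused")
			os.Exit(1)
		}
	}
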
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-863373
helpers_test.go:244: (dbg) docker inspect no-preload-863373:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc",
	        "Created": "2025-12-28T07:16:51.565098217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:18:04.489231138Z",
	            "FinishedAt": "2025-12-28T07:18:03.671684015Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/hostname",
	        "HostsPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/hosts",
	        "LogPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc-json.log",
	        "Name": "/no-preload-863373",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-863373:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-863373",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc",
	                "LowerDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-863373",
	                "Source": "/var/lib/docker/volumes/no-preload-863373/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-863373",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-863373",
	                "name.minikube.sigs.k8s.io": "no-preload-863373",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dabbc96817aa32381878570980099406dffffef47a5c4082d7526166d5877980",
	            "SandboxKey": "/var/run/docker/netns/dabbc96817aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-863373": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:24:7e:9a:ff:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45333eb9e7892b033e53341f3e0ac61e624982205e2c122d87a85b3bd8478a9f",
	                    "EndpointID": "21b07798d1a4925732ad5b05c1a95023c466f946bc6f072d1a57e6e6810533ee",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-863373",
	                        "0704657a2bc1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
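
For reference, the inspect output above shows State.Running=true and State.Paused=false, which is consistent with the Audit table below: by the time the post-mortem ran, the test had already executed `unpause`. If the State fields need to be read programmatically rather than scraped from text, `docker inspect` emits a JSON array that decodes with a couple of struct fields; a small hypothetical helper:

	// Hypothetical helper: decode State.Status/Running/Paused from
	// `docker inspect` (it prints a JSON array, one element per container).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-863373").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Printf("%s: status=%s running=%v paused=%v\n",
				c.Name, c.State.Status, c.State.Running, c.State.Paused)
		}
	}
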
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863373 -n no-preload-863373
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-863373 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │                     │
	│ ssh     │ force-systemd-env-782848 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ delete  │ -p force-systemd-env-782848                                                                                                                                                                                                                         │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ start   │ -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p cert-options-913529                                                                                                                                                                                                                              │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ stop    │ -p old-k8s-version-251758 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
	│ image   │ old-k8s-version-251758 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ pause   │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
	│ stop    │ -p no-preload-863373 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
	│ addons  │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ image   │ no-preload-863373 image list --format=json                                                                                                                                                                                                          │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ pause   │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ unpause │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:18:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:18:04.210105  223335 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:18:04.210218  223335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:18:04.210229  223335 out.go:374] Setting ErrFile to fd 2...
	I1228 07:18:04.210235  223335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:18:04.210485  223335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:18:04.210833  223335 out.go:368] Setting JSON to false
	I1228 07:18:04.211668  223335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3634,"bootTime":1766902650,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:18:04.211737  223335 start.go:143] virtualization:  
	I1228 07:18:04.214696  223335 out.go:179] * [no-preload-863373] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:18:04.218567  223335 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:18:04.218676  223335 notify.go:221] Checking for updates...
	I1228 07:18:04.224562  223335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:18:04.227469  223335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:04.230365  223335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:18:04.233196  223335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:18:04.236017  223335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:18:04.239421  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:04.239974  223335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:18:04.273577  223335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:18:04.273686  223335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:18:04.335584  223335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:18:04.326234863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:18:04.335691  223335 docker.go:319] overlay module found
	I1228 07:18:04.338880  223335 out.go:179] * Using the docker driver based on existing profile
	I1228 07:18:04.341707  223335 start.go:309] selected driver: docker
	I1228 07:18:04.341730  223335 start.go:928] validating driver "docker" against &{Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:04.341838  223335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:18:04.342588  223335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:18:04.401750  223335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:18:04.390907262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
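
The info.go blocks above are minikube decoding the output of `docker system info --format "{{json .}}"` (the cli_runner.go lines show the exact invocation). A minimal Go sketch of that pattern follows; the struct keeps only a few of the fields visible in the log, as an illustration rather than minikube's real type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo mirrors a handful of the fields visible in the log above;
// the struct minikube actually decodes carries many more.
type dockerInfo struct {
	ServerVersion string
	CgroupDriver  string
	NCPU          int
	MemTotal      int64
}

func main() {
	// Same invocation as the cli_runner.go line above.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %q, %d CPUs, %d bytes memory\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
}
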
	I1228 07:18:04.402087  223335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:18:04.402124  223335 cni.go:84] Creating CNI manager for ""
	I1228 07:18:04.402185  223335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
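
cni.go picks a CNI from the driver/runtime pair: the docker driver with a non-docker runtime (containerd here) cannot reuse docker's own bridge networking, so kindnet is recommended. A deliberately simplified Go sketch of that decision, not minikube's actual cni.go logic (which also weighs user-supplied CNI flags and multinode setups):

package main

import "fmt"

// chooseCNI is a toy version of the recommendation logged above.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "docker" {
		return "" // legacy docker networking, no CNI manager needed
	}
	return "kindnet"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
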
	I1228 07:18:04.402232  223335 start.go:353] cluster config:
	{Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:04.405461  223335 out.go:179] * Starting "no-preload-863373" primary control-plane node in "no-preload-863373" cluster
	I1228 07:18:04.408395  223335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:18:04.411369  223335 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:18:04.414099  223335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:18:04.414170  223335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:18:04.414245  223335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/config.json ...
	I1228 07:18:04.414544  223335 cache.go:107] acquiring lock: {Name:mk675ae57a43ad1dcd013ca7bfeabdb5cfff3e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414636  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 07:18:04.414648  223335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.851µs
	I1228 07:18:04.414662  223335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 07:18:04.414674  223335 cache.go:107] acquiring lock: {Name:mk25211b1bcacda03f06c284c2f1d87c293f500d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414709  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 07:18:04.414719  223335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 46.917µs
	I1228 07:18:04.414726  223335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 07:18:04.414735  223335 cache.go:107] acquiring lock: {Name:mkecaafbf95bf5e637e02091f1e57be37fd26cd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414766  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 07:18:04.414776  223335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 41.806µs
	I1228 07:18:04.414783  223335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 07:18:04.414792  223335 cache.go:107] acquiring lock: {Name:mk4aac8221878cba9f26c204ecfa3d180ffd3c99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414824  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 07:18:04.414833  223335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 42.519µs
	I1228 07:18:04.414847  223335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 07:18:04.414857  223335 cache.go:107] acquiring lock: {Name:mk95a06d630633758c0fce1c69bbc91e5f9c1763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414889  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 07:18:04.414899  223335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 43.184µs
	I1228 07:18:04.414906  223335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 07:18:04.414916  223335 cache.go:107] acquiring lock: {Name:mk8bad262aa0eb5c84fb05bde99edcd03a9862ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414948  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1228 07:18:04.414957  223335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.027µs
	I1228 07:18:04.414963  223335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 07:18:04.414986  223335 cache.go:107] acquiring lock: {Name:mke834d38136a3f1bc976929fc24bcc96d745a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.415019  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 07:18:04.415029  223335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 44.373µs
	I1228 07:18:04.415035  223335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 07:18:04.415044  223335 cache.go:107] acquiring lock: {Name:mk167fc96a63a95f47f0169364d746a7806993b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.415076  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 07:18:04.415085  223335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 41.772µs
	I1228 07:18:04.415091  223335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 07:18:04.415097  223335 cache.go:87] Successfully saved all images to host disk.
	I1228 07:18:04.434351  223335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:18:04.434371  223335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:18:04.434386  223335 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:18:04.434414  223335 start.go:360] acquireMachinesLock for no-preload-863373: {Name:mk40422d73ffad526263a6e1c84f556b25bc76b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.434487  223335 start.go:364] duration metric: took 58.413µs to acquireMachinesLock for "no-preload-863373"
	I1228 07:18:04.434508  223335 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:18:04.434513  223335 fix.go:54] fixHost starting: 
	I1228 07:18:04.434773  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:04.451680  223335 fix.go:112] recreateIfNeeded on no-preload-863373: state=Stopped err=<nil>
	W1228 07:18:04.451716  223335 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:18:04.455032  223335 out.go:252] * Restarting existing docker container for "no-preload-863373" ...
	I1228 07:18:04.455154  223335 cli_runner.go:164] Run: docker start no-preload-863373
	I1228 07:18:04.724808  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:04.758189  223335 kic.go:430] container "no-preload-863373" state is running.
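
fix.go found the profile's container in state=Stopped, so instead of recreating the machine it restarts the existing container. A hedged Go sketch of the same inspect-then-start sequence using os/exec; the container name is taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Profile/container name from the fix.go lines above.
	name := "no-preload-863373"
	// Same query as the cli_runner.go line: docker container inspect --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	if state := strings.TrimSpace(string(out)); state != "running" {
		fmt.Printf("state=%s, restarting existing container\n", state)
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
}
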
	I1228 07:18:04.758585  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:04.789291  223335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/config.json ...
	I1228 07:18:04.789520  223335 machine.go:94] provisionDockerMachine start ...
	I1228 07:18:04.789580  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:04.814681  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:04.815001  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:04.815009  223335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:18:04.815694  223335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:18:07.956090  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-863373
	
	I1228 07:18:07.956119  223335 ubuntu.go:182] provisioning hostname "no-preload-863373"
	I1228 07:18:07.956185  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:07.974423  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:07.974742  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:07.974758  223335 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-863373 && echo "no-preload-863373" | sudo tee /etc/hostname
	I1228 07:18:08.120237  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-863373
	
	I1228 07:18:08.120344  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.142670  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:08.142997  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:08.143022  223335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-863373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-863373/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-863373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:18:08.280890  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:18:08.280929  223335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:18:08.280958  223335 ubuntu.go:190] setting up certificates
	I1228 07:18:08.280967  223335 provision.go:84] configureAuth start
	I1228 07:18:08.281034  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:08.299145  223335 provision.go:143] copyHostCerts
	I1228 07:18:08.299216  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:18:08.299239  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:18:08.299321  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:18:08.299435  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:18:08.299446  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:18:08.299476  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:18:08.299547  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:18:08.299557  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:18:08.299583  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:18:08.299646  223335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.no-preload-863373 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-863373]
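
provision.go signs a server certificate with the minikube CA, embedding the SAN list printed above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-863373). A self-contained Go sketch of that shape using crypto/x509, with a throwaway CA in place of the ca.pem/ca-key.pem files from the log; the key type and validity windows are illustrative assumptions:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for the ca.pem/ca-key.pem pair from the log.
	caKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the SAN set from the provision.go line above.
	srvKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-863373"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-863373"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued server certificate, %d DER bytes\n", len(srvDER))
}
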
	I1228 07:18:08.553781  223335 provision.go:177] copyRemoteCerts
	I1228 07:18:08.553851  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:18:08.553895  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.571197  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.668183  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:18:08.686448  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:18:08.704354  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:18:08.721904  223335 provision.go:87] duration metric: took 440.912445ms to configureAuth
	I1228 07:18:08.721933  223335 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:18:08.722177  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:08.722195  223335 machine.go:97] duration metric: took 3.932666405s to provisionDockerMachine
	I1228 07:18:08.722214  223335 start.go:293] postStartSetup for "no-preload-863373" (driver="docker")
	I1228 07:18:08.722229  223335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:18:08.722301  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:18:08.722363  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.739998  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.840335  223335 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:18:08.843659  223335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:18:08.843690  223335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:18:08.843702  223335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:18:08.843775  223335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:18:08.843862  223335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:18:08.843964  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:18:08.851498  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:18:08.868737  223335 start.go:296] duration metric: took 146.503384ms for postStartSetup
	I1228 07:18:08.868816  223335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:18:08.868857  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.886416  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.985659  223335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:18:08.990529  223335 fix.go:56] duration metric: took 4.556008816s for fixHost
	I1228 07:18:08.990557  223335 start.go:83] releasing machines lock for "no-preload-863373", held for 4.556060321s
	I1228 07:18:08.990624  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:09.009999  223335 ssh_runner.go:195] Run: cat /version.json
	I1228 07:18:09.010030  223335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:18:09.010056  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:09.010085  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:09.035820  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:09.038178  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:09.226446  223335 ssh_runner.go:195] Run: systemctl --version
	I1228 07:18:09.233239  223335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:18:09.237834  223335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:18:09.237928  223335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:18:09.245528  223335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:18:09.245549  223335 start.go:496] detecting cgroup driver to use...
	I1228 07:18:09.245579  223335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:18:09.245629  223335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:18:09.261516  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:18:09.277164  223335 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:18:09.277260  223335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:18:09.292696  223335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:18:09.306633  223335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:18:09.412475  223335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:18:09.526698  223335 docker.go:234] disabling docker service ...
	I1228 07:18:09.526823  223335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:18:09.541636  223335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:18:09.554690  223335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:18:09.661341  223335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:18:09.780561  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:18:09.793330  223335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:18:09.806806  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:18:09.815963  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:18:09.824985  223335 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:18:09.825055  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:18:09.834061  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:18:09.843279  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:18:09.852260  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:18:09.861421  223335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:18:09.869678  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:18:09.878735  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:18:09.888119  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:18:09.897144  223335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:18:09.904614  223335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:18:09.912081  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:10.031060  223335 ssh_runner.go:195] Run: sudo systemctl restart containerd
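
The sed run above rewrites /etc/containerd/config.toml so that SystemdCgroup = false, matching the cgroupfs driver detected on the host, before containerd is restarted. The same edit expressed as a small Go regexp (the config.toml fragment in main is hypothetical):

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mirrors the sed from containerd.go above:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	// Hypothetical fragment for illustration only.
	in := "    SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in)) // "    SystemdCgroup = false"
}
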
	I1228 07:18:10.177228  223335 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:18:10.177293  223335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:18:10.180839  223335 start.go:574] Will wait 60s for crictl version
	I1228 07:18:10.180956  223335 ssh_runner.go:195] Run: which crictl
	I1228 07:18:10.184302  223335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:18:10.210425  223335 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:18:10.210516  223335 ssh_runner.go:195] Run: containerd --version
	I1228 07:18:10.229509  223335 ssh_runner.go:195] Run: containerd --version
	I1228 07:18:10.253636  223335 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:18:10.256638  223335 cli_runner.go:164] Run: docker network inspect no-preload-863373 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:18:10.272952  223335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:18:10.276871  223335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:18:10.286598  223335 kubeadm.go:884] updating cluster {Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:18:10.286729  223335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:18:10.286788  223335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:18:10.310966  223335 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:18:10.310993  223335 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:18:10.311002  223335 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1228 07:18:10.311103  223335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-863373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:18:10.311170  223335 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:18:10.338333  223335 cni.go:84] Creating CNI manager for ""
	I1228 07:18:10.338357  223335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:18:10.338376  223335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:18:10.338398  223335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-863373 NodeName:no-preload-863373 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:18:10.338526  223335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-863373"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:18:10.338595  223335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:18:10.346162  223335 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:18:10.346230  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:18:10.353752  223335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1228 07:18:10.366988  223335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:18:10.379587  223335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
	I1228 07:18:10.392268  223335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:18:10.395890  223335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:18:10.406107  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:10.514389  223335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:18:10.531222  223335 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373 for IP: 192.168.76.2
	I1228 07:18:10.531240  223335 certs.go:195] generating shared ca certs ...
	I1228 07:18:10.531257  223335 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:10.531406  223335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:18:10.531460  223335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:18:10.531473  223335 certs.go:257] generating profile certs ...
	I1228 07:18:10.531558  223335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.key
	I1228 07:18:10.531631  223335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.key.770dd85f
	I1228 07:18:10.531674  223335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.key
	I1228 07:18:10.531783  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:18:10.531819  223335 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:18:10.531831  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:18:10.531861  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:18:10.531889  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:18:10.531917  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:18:10.531970  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:18:10.532649  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:18:10.555579  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:18:10.575470  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:18:10.594853  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:18:10.615158  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:18:10.634566  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:18:10.657760  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:18:10.677948  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:18:10.713366  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:18:10.776175  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:18:10.797138  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:18:10.815554  223335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:18:10.829954  223335 ssh_runner.go:195] Run: openssl version
	I1228 07:18:10.835921  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.843220  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:18:10.850734  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.854646  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.854732  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.895519  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:18:10.902867  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.910155  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:18:10.917712  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.921332  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.921409  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.963602  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:18:10.971617  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.980013  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:18:10.987604  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.991305  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.991377  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:18:11.033513  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:18:11.040944  223335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:18:11.044660  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:18:11.086532  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:18:11.127993  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:18:11.171766  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:18:11.221651  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:18:11.271022  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
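
Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A Go equivalent using crypto/x509, shown as a sketch rather than minikube's actual check; run it as e.g. `go run checkend.go <cert.pem>`:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Exit non-zero if the certificate in os.Args[1] expires within 86400s,
// mirroring `openssl x509 -noout -checkend 86400`.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
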
	I1228 07:18:11.345458  223335 kubeadm.go:401] StartCluster: {Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:11.345610  223335 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:18:11.388593  223335 cri.go:83] list returned 3 containers
	I1228 07:18:11.388673  223335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:18:11.404310  223335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:18:11.404333  223335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:18:11.404394  223335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:18:11.416870  223335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:18:11.417319  223335 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-863373" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:11.417428  223335 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-863373" cluster setting kubeconfig missing "no-preload-863373" context setting]
	I1228 07:18:11.417720  223335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.419282  223335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:18:11.430254  223335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 07:18:11.430295  223335 kubeadm.go:602] duration metric: took 25.955678ms to restartPrimaryControlPlane
	I1228 07:18:11.430305  223335 kubeadm.go:403] duration metric: took 84.858959ms to StartCluster
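
The restart path above skips `kubeadm init` because `diff -u` found the deployed kubeadm.yaml identical to the freshly rendered kubeadm.yaml.new. A sketch of that decision as a plain byte comparison; the helper name is hypothetical:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig is a toy version of the check logged by kubeadm.go above:
// compare the deployed config with the newly rendered one and only
// reconfigure the control plane when they differ.
func needsReconfig(oldPath, newPath string) (bool, error) {
	oldCfg, err := os.ReadFile(oldPath)
	if err != nil {
		return true, nil // no deployed config yet: must configure
	}
	newCfg, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(oldCfg, newCfg), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}
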
	I1228 07:18:11.430320  223335 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.430386  223335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:11.431005  223335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.431225  223335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:18:11.431623  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:11.431618  223335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:18:11.431782  223335 addons.go:70] Setting storage-provisioner=true in profile "no-preload-863373"
	I1228 07:18:11.431798  223335 addons.go:239] Setting addon storage-provisioner=true in "no-preload-863373"
	W1228 07:18:11.431805  223335 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:18:11.431834  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.432329  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.432533  223335 addons.go:70] Setting default-storageclass=true in profile "no-preload-863373"
	I1228 07:18:11.432574  223335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-863373"
	I1228 07:18:11.432813  223335 addons.go:70] Setting metrics-server=true in profile "no-preload-863373"
	I1228 07:18:11.432836  223335 addons.go:239] Setting addon metrics-server=true in "no-preload-863373"
	W1228 07:18:11.432844  223335 addons.go:248] addon metrics-server should already be in state true
	I1228 07:18:11.432874  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.432988  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.433314  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.436919  223335 addons.go:70] Setting dashboard=true in profile "no-preload-863373"
	I1228 07:18:11.436951  223335 addons.go:239] Setting addon dashboard=true in "no-preload-863373"
	W1228 07:18:11.436960  223335 addons.go:248] addon dashboard should already be in state true
	I1228 07:18:11.437154  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.437066  223335 out.go:179] * Verifying Kubernetes components...
	I1228 07:18:11.439312  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.448626  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:11.517101  223335 addons.go:239] Setting addon default-storageclass=true in "no-preload-863373"
	W1228 07:18:11.517131  223335 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:18:11.517164  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.517652  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.526533  223335 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:18:11.529975  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:18:11.530001  223335 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:18:11.530067  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.534838  223335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:18:11.537689  223335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:18:11.540577  223335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:18:11.540688  223335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:18:11.540704  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:18:11.540766  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.543381  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:18:11.543411  223335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:18:11.543476  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.574365  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.599988  223335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:18:11.600011  223335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:18:11.600070  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.630274  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.641714  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.659574  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.769637  223335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:18:11.829751  223335 node_ready.go:35] waiting up to 6m0s for node "no-preload-863373" to be "Ready" ...
	I1228 07:18:11.845571  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:18:11.845632  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:18:11.885019  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:18:11.885040  223335 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:18:11.941136  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:18:11.941198  223335 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:18:11.949833  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:18:11.981069  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:18:12.017233  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:18:12.017299  223335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:18:12.027357  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:18:12.248572  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:18:12.248637  223335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:18:12.440818  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:18:12.440892  223335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:18:12.577104  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:18:12.577181  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:18:12.620888  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:18:12.620965  223335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:18:12.713946  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:18:12.714010  223335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:18:12.745948  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:18:12.746025  223335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:18:12.773897  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:18:12.773960  223335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:18:12.817557  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:18:12.817633  223335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:18:12.866259  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:18:15.135447  223335 node_ready.go:49] node "no-preload-863373" is "Ready"
	I1228 07:18:15.135480  223335 node_ready.go:38] duration metric: took 3.3056549s for node "no-preload-863373" to be "Ready" ...
	I1228 07:18:15.135495  223335 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:18:15.135555  223335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:18:15.462247  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.512326263s)
	I1228 07:18:17.640276  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.659136774s)
	I1228 07:18:17.640365  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.612938994s)
	I1228 07:18:17.640375  223335 addons.go:495] Verifying addon metrics-server=true in "no-preload-863373"
	I1228 07:18:17.640533  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.77419249s)
	I1228 07:18:17.640712  223335 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.505139133s)
	I1228 07:18:17.640735  223335 api_server.go:72] duration metric: took 6.209481345s to wait for apiserver process to appear ...
	I1228 07:18:17.640742  223335 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:18:17.640758  223335 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:18:17.644240  223335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-863373 addons enable metrics-server
	
	I1228 07:18:17.647231  223335 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1228 07:18:17.649152  223335 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:18:17.650313  223335 api_server.go:141] control plane version: v1.35.0
	I1228 07:18:17.650338  223335 api_server.go:131] duration metric: took 9.589428ms to wait for apiserver health ...
	I1228 07:18:17.650348  223335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:18:17.650567  223335 addons.go:530] duration metric: took 6.218958648s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1228 07:18:17.654208  223335 system_pods.go:59] 9 kube-system pods found
	I1228 07:18:17.654243  223335 system_pods.go:61] "coredns-7d764666f9-j2lwq" [7ed3a1ae-d5ec-4274-9264-52845d3e00a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:18:17.654252  223335 system_pods.go:61] "etcd-no-preload-863373" [bb47b3ac-c648-44af-9179-e21fd315a5f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:18:17.654262  223335 system_pods.go:61] "kindnet-mm548" [061dcf01-8219-4abc-93df-5e4c3392c108] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:18:17.654269  223335 system_pods.go:61] "kube-apiserver-no-preload-863373" [27e3f644-63c0-43ad-bd3e-f6e49dd52278] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:18:17.654276  223335 system_pods.go:61] "kube-controller-manager-no-preload-863373" [039b2227-a1e8-4e1f-bed3-7b8c943fd581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:18:17.654283  223335 system_pods.go:61] "kube-proxy-t6l8g" [3e82261b-19a3-458f-b1f2-1d690115afc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:18:17.654289  223335 system_pods.go:61] "kube-scheduler-no-preload-863373" [b41aab7f-79af-4f32-ac5b-3cf2ad737dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:18:17.654296  223335 system_pods.go:61] "metrics-server-5d785b57d4-25rzl" [ffcb8454-7d9d-4854-9e0f-57c3468a22d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:18:17.654313  223335 system_pods.go:61] "storage-provisioner" [5e47dc59-09d2-4fc3-951f-e140d54cdab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:18:17.654320  223335 system_pods.go:74] duration metric: took 3.965836ms to wait for pod list to return data ...
	I1228 07:18:17.654327  223335 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:18:17.656798  223335 default_sa.go:45] found service account: "default"
	I1228 07:18:17.656825  223335 default_sa.go:55] duration metric: took 2.491198ms for default service account to be created ...
	I1228 07:18:17.656834  223335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:18:17.659570  223335 system_pods.go:86] 9 kube-system pods found
	I1228 07:18:17.659605  223335 system_pods.go:89] "coredns-7d764666f9-j2lwq" [7ed3a1ae-d5ec-4274-9264-52845d3e00a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:18:17.659615  223335 system_pods.go:89] "etcd-no-preload-863373" [bb47b3ac-c648-44af-9179-e21fd315a5f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:18:17.659624  223335 system_pods.go:89] "kindnet-mm548" [061dcf01-8219-4abc-93df-5e4c3392c108] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:18:17.659632  223335 system_pods.go:89] "kube-apiserver-no-preload-863373" [27e3f644-63c0-43ad-bd3e-f6e49dd52278] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:18:17.659644  223335 system_pods.go:89] "kube-controller-manager-no-preload-863373" [039b2227-a1e8-4e1f-bed3-7b8c943fd581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:18:17.659651  223335 system_pods.go:89] "kube-proxy-t6l8g" [3e82261b-19a3-458f-b1f2-1d690115afc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:18:17.659666  223335 system_pods.go:89] "kube-scheduler-no-preload-863373" [b41aab7f-79af-4f32-ac5b-3cf2ad737dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:18:17.659673  223335 system_pods.go:89] "metrics-server-5d785b57d4-25rzl" [ffcb8454-7d9d-4854-9e0f-57c3468a22d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:18:17.659680  223335 system_pods.go:89] "storage-provisioner" [5e47dc59-09d2-4fc3-951f-e140d54cdab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:18:17.659697  223335 system_pods.go:126] duration metric: took 2.85724ms to wait for k8s-apps to be running ...
	I1228 07:18:17.659705  223335 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:18:17.659764  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:18:17.672615  223335 system_svc.go:56] duration metric: took 12.901712ms WaitForService to wait for kubelet
	I1228 07:18:17.672642  223335 kubeadm.go:587] duration metric: took 6.241387509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:18:17.672659  223335 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:18:17.675572  223335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:18:17.675605  223335 node_conditions.go:123] node cpu capacity is 2
	I1228 07:18:17.675618  223335 node_conditions.go:105] duration metric: took 2.953865ms to run NodePressure ...
	I1228 07:18:17.675649  223335 start.go:242] waiting for startup goroutines ...
	I1228 07:18:17.675663  223335 start.go:247] waiting for cluster config update ...
	I1228 07:18:17.675675  223335 start.go:256] writing updated cluster config ...
	I1228 07:18:17.675964  223335 ssh_runner.go:195] Run: rm -f paused
	I1228 07:18:17.679665  223335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:18:17.682783  223335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-j2lwq" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:18:19.689383  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:22.188555  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:24.688642  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:26.688757  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:29.188106  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:31.188312  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:33.188567  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:35.188655  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:37.188761  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:39.688883  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:42.189276  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:44.189644  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:46.688703  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:49.188146  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:51.190791  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	I1228 07:18:53.196638  223335 pod_ready.go:94] pod "coredns-7d764666f9-j2lwq" is "Ready"
	I1228 07:18:53.196670  223335 pod_ready.go:86] duration metric: took 35.513860449s for pod "coredns-7d764666f9-j2lwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.199594  223335 pod_ready.go:83] waiting for pod "etcd-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.209072  223335 pod_ready.go:94] pod "etcd-no-preload-863373" is "Ready"
	I1228 07:18:53.209102  223335 pod_ready.go:86] duration metric: took 9.481594ms for pod "etcd-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.211391  223335 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.216639  223335 pod_ready.go:94] pod "kube-apiserver-no-preload-863373" is "Ready"
	I1228 07:18:53.216709  223335 pod_ready.go:86] duration metric: took 5.287801ms for pod "kube-apiserver-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.219468  223335 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.386920  223335 pod_ready.go:94] pod "kube-controller-manager-no-preload-863373" is "Ready"
	I1228 07:18:53.386949  223335 pod_ready.go:86] duration metric: took 167.456125ms for pod "kube-controller-manager-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.587107  223335 pod_ready.go:83] waiting for pod "kube-proxy-t6l8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.986661  223335 pod_ready.go:94] pod "kube-proxy-t6l8g" is "Ready"
	I1228 07:18:53.986687  223335 pod_ready.go:86] duration metric: took 399.548208ms for pod "kube-proxy-t6l8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.187166  223335 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.586352  223335 pod_ready.go:94] pod "kube-scheduler-no-preload-863373" is "Ready"
	I1228 07:18:54.586384  223335 pod_ready.go:86] duration metric: took 399.153816ms for pod "kube-scheduler-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.586396  223335 pod_ready.go:40] duration metric: took 36.90666167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:18:54.637161  223335 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1228 07:18:54.640314  223335 out.go:203] 
	W1228 07:18:54.643220  223335 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1228 07:18:54.646050  223335 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:18:54.648843  223335 out.go:179] * Done! kubectl is now configured to use "no-preload-863373" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cd82a9b6bb7ca       66749159455b3       10 seconds ago       Running             storage-provisioner       2                   aa85ec172a655       storage-provisioner                         kube-system
	e8498ba6c7fd2       20b332c9a70d8       47 seconds ago       Running             kubernetes-dashboard      0                   45054230490ed       kubernetes-dashboard-b84665fb8-5gjbr        kubernetes-dashboard
	1f6698edd2aad       e08f4d9d2e6ed       53 seconds ago       Running             coredns                   1                   fa30d8b38b1b5       coredns-7d764666f9-j2lwq                    kube-system
	8a72355eb61b1       1611cd07b61d5       53 seconds ago       Running             busybox                   1                   ea4a79b855665       busybox                                     default
	d903784e30545       66749159455b3       53 seconds ago       Exited              storage-provisioner       1                   aa85ec172a655       storage-provisioner                         kube-system
	17028f4a8bdf4       de369f46c2ff5       53 seconds ago       Running             kube-proxy                1                   7953d967b9477       kube-proxy-t6l8g                            kube-system
	7a0af7c47c84a       c96ee3c174987       53 seconds ago       Running             kindnet-cni               1                   8e019971f7b5d       kindnet-mm548                               kube-system
	ad6e6187d14d1       271e49a0ebc56       58 seconds ago       Running             etcd                      1                   ce2164338b232       etcd-no-preload-863373                      kube-system
	99dd45648c833       88898f1d1a62a       58 seconds ago       Running             kube-controller-manager   1                   aa7531590caad       kube-controller-manager-no-preload-863373   kube-system
	615dec4099747       c3fcf259c473a       58 seconds ago       Running             kube-apiserver            1                   60a7a579e1b57       kube-apiserver-no-preload-863373            kube-system
	adce0e91e1531       ddc8422d4d35a       58 seconds ago       Running             kube-scheduler            1                   ac86e3248fdb4       kube-scheduler-no-preload-863373            kube-system
	1eabe3bb41409       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   da452e3bf4b46       busybox                                     default
	1fbaac41ada66       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   72c046ad534ee       coredns-7d764666f9-j2lwq                    kube-system
	b2e44eed719fa       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   e27b756f4c142       kindnet-mm548                               kube-system
	b58d0fb33e410       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   f2edb88c5130e       kube-proxy-t6l8g                            kube-system
	7232b6de7b7f1       88898f1d1a62a       About a minute ago   Exited              kube-controller-manager   0                   30c3c1a417aee       kube-controller-manager-no-preload-863373   kube-system
	5273cd83fffd8       ddc8422d4d35a       About a minute ago   Exited              kube-scheduler            0                   f346362e15fea       kube-scheduler-no-preload-863373            kube-system
	b5aa7e5bff890       271e49a0ebc56       About a minute ago   Exited              etcd                      0                   1d16241ac7917       etcd-no-preload-863373                      kube-system
	4c9946cb0eff6       c3fcf259c473a       About a minute ago   Exited              kube-apiserver            0                   7fd19e0d00e64       kube-apiserver-no-preload-863373            kube-system
	
	
	==> containerd <==
	Dec 28 07:18:59 no-preload-863373 containerd[556]: time="2025-12-28T07:18:59.783002200Z" level=info msg="StartContainer for \"cd82a9b6bb7ca8da371d784b05ef58671be745315dafe8d5afdbec56bcab4ba2\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.493487405Z" level=info msg="StopPodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.493996447Z" level=info msg="TearDown network for sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.494039090Z" level=info msg="StopPodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511408935Z" level=info msg="RemovePodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511458453Z" level=info msg="Forcibly stopping sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511901296Z" level=info msg="TearDown network for sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.515465275Z" level=info msg="Ensure that sandbox f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c in task-service has been cleanup successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.522735272Z" level=info msg="RemovePodSandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.527209568Z" level=info msg="StopPodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.567899310Z" level=info msg="TearDown network for sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.580009768Z" level=info msg="StopPodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.584543774Z" level=info msg="RemovePodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.584681408Z" level=info msg="Forcibly stopping sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.627429416Z" level=info msg="TearDown network for sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.639029134Z" level=info msg="Ensure that sandbox ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8 in task-service has been cleanup successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.650788574Z" level=info msg="RemovePodSandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" returns successfully"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.095122327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.674748232Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.682194657Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.685682016Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.685724248Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.688515239Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.858397788Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.858630266Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               no-preload-863373
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-863373
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-863373
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_17_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:17:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-863373
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:19:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-863373
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                aa6b9168-cbdb-42d6-933d-d9f7f74ef280
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-j2lwq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-no-preload-863373                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-mm548                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-863373              250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-863373     200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-t6l8g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-863373              100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 metrics-server-5d785b57d4-25rzl               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         79s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-2sr6v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5gjbr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node no-preload-863373 event: Registered Node no-preload-863373 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-863373 event: Registered Node no-preload-863373 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:19:10 up  1:01,  0 user,  load average: 1.35, 1.60, 1.73
	Linux no-preload-863373 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.373785    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12c641bd941acee30af44f721a732c22-usr-local-share-ca-certificates\") pod \"kube-apiserver-no-preload-863373\" (UID: \"12c641bd941acee30af44f721a732c22\") " pod="kube-system/kube-apiserver-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.373866    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b801536985c6020d6311fc5410e1bf9b-etcd-certs\") pod \"etcd-no-preload-863373\" (UID: \"b801536985c6020d6311fc5410e1bf9b\") " pod="kube-system/etcd-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374021    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b801536985c6020d6311fc5410e1bf9b-etcd-data\") pod \"etcd-no-preload-863373\" (UID: \"b801536985c6020d6311fc5410e1bf9b\") " pod="kube-system/etcd-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374106    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12c641bd941acee30af44f721a732c22-etc-ca-certificates\") pod \"kube-apiserver-no-preload-863373\" (UID: \"12c641bd941acee30af44f721a732c22\") " pod="kube-system/kube-apiserver-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374465    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e51a4f40f514b176d42b2ded59e69497-ca-certs\") pod \"kube-controller-manager-no-preload-863373\" (UID: \"e51a4f40f514b176d42b2ded59e69497\") " pod="kube-system/kube-controller-manager-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374572    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e51a4f40f514b176d42b2ded59e69497-usr-local-share-ca-certificates\") pod \"kube-controller-manager-no-preload-863373\" (UID: \"e51a4f40f514b176d42b2ded59e69497\") " pod="kube-system/kube-controller-manager-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.414758    2383 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.475641    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e47dc59-09d2-4fc3-951f-e140d54cdab2-tmp\") pod \"storage-provisioner\" (UID: \"5e47dc59-09d2-4fc3-951f-e140d54cdab2\") " pod="kube-system/storage-provisioner"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.475936    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-xtables-lock\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476147    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-cni-cfg\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476333    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e82261b-19a3-458f-b1f2-1d690115afc1-xtables-lock\") pod \"kube-proxy-t6l8g\" (UID: \"3e82261b-19a3-458f-b1f2-1d690115afc1\") " pod="kube-system/kube-proxy-t6l8g"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476507    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-lib-modules\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476629    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e82261b-19a3-458f-b1f2-1d690115afc1-lib-modules\") pod \"kube-proxy-t6l8g\" (UID: \"3e82261b-19a3-458f-b1f2-1d690115afc1\") " pod="kube-system/kube-proxy-t6l8g"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.686853    2383 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.686937    2383 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.687276    2383 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-25rzl_kube-system(ffcb8454-7d9d-4854-9e0f-57c3468a22d3): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.687320    2383 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-25rzl" podUID="ffcb8454-7d9d-4854-9e0f-57c3468a22d3"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787143    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-863373" containerName="kube-scheduler"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787547    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-863373" containerName="kube-controller-manager"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787870    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-863373" containerName="kube-apiserver"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.788168    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-863373" containerName="etcd"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.858971    2383 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859028    2383 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859237    2383 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-2sr6v_kubernetes-dashboard(0af8c712-b414-4036-b067-b58dac667efd): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859278    2383 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-2sr6v" podUID="0af8c712-b414-4036-b067-b58dac667efd"
	

-- /stdout --
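
The kubelet and containerd entries in the log above show two distinct image-pull failures. The metrics-server image "fake.domain/registry.k8s.io/echoserver:1.4" fails DNS resolution, which is expected for this test's deliberately unreachable registry. The dashboard-metrics-scraper image "registry.k8s.io/echoserver:1.4" is rejected for a different reason: its manifest still uses the Docker schema 1 media type ("application/vnd.docker.distribution.manifest.v1+prettyjws"), which containerd dropped in v2.1, so this pull fails on any containerd 2.1+ runtime regardless of network. Below is a minimal Go sketch of probing a manifest's media type directly; the registry URL is an assumption for illustration, and some registries require a bearer token on this endpoint.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical probe: HEAD the manifest endpoint and inspect the
	// Content-Type the registry answers with. A schema 1 image typically
	// reports application/vnd.docker.distribution.manifest.v1+prettyjws,
	// the media type containerd v2.1+ refuses to unpack.
	url := "https://registry.k8s.io/v2/echoserver/manifests/1.4"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	// Advertise modern manifest types first so the registry only falls
	// back to schema 1 when nothing newer exists for the tag.
	req.Header.Set("Accept",
		"application/vnd.oci.image.manifest.v1+json, "+
			"application/vnd.docker.distribution.manifest.v2+json, "+
			"application/vnd.docker.distribution.manifest.v1+prettyjws")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:    ", resp.Status)
	fmt.Println("media type:", resp.Header.Get("Content-Type"))
}
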
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863373 -n no-preload-863373
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-863373 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v: exit status 1 (84.487859ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-25rzl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-2sr6v" not found

** /stderr **
helpers_test.go:288: kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v: exit status 1
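
The NotFound errors above come from a stale snapshot: the harness first lists pods with the field selector status.phase!=Running, then describes them by name, and by the time describe runs those pods have already been replaced or cleaned up. A minimal client-go sketch of the same list query (assuming a kubeconfig at the default path; package paths are the standard k8s.io modules) is:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, mirroring what kubectl --context does here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same query as the post-mortem: every pod, in any namespace, whose
	// phase is not Running. The returned names can go stale immediately.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
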
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-863373
helpers_test.go:244: (dbg) docker inspect no-preload-863373:

-- stdout --
	[
	    {
	        "Id": "0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc",
	        "Created": "2025-12-28T07:16:51.565098217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:18:04.489231138Z",
	            "FinishedAt": "2025-12-28T07:18:03.671684015Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/hostname",
	        "HostsPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/hosts",
	        "LogPath": "/var/lib/docker/containers/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc/0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc-json.log",
	        "Name": "/no-preload-863373",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-863373:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-863373",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0704657a2bc1d7763cc84ab217e17cf1363771501bb87476376f04e59bd1fccc",
	                "LowerDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31ae2e6596f93f49c609eb8cac7eec713e7ff9be5361fbe904c7788208b4a3ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-863373",
	                "Source": "/var/lib/docker/volumes/no-preload-863373/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-863373",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-863373",
	                "name.minikube.sigs.k8s.io": "no-preload-863373",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dabbc96817aa32381878570980099406dffffef47a5c4082d7526166d5877980",
	            "SandboxKey": "/var/run/docker/netns/dabbc96817aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-863373": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:24:7e:9a:ff:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45333eb9e7892b033e53341f3e0ac61e624982205e2c122d87a85b3bd8478a9f",
	                    "EndpointID": "21b07798d1a4925732ad5b05c1a95023c466f946bc6f072d1a57e6e6810533ee",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-863373",
	                        "0704657a2bc1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
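When only a few fields of an inspect dump like the one above are needed, docker inspect accepts a Go template; a minimal sketch using only template paths that appear elsewhere in this log:

	# container state, matching the "State" block above
	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' no-preload-863373
	# host port bound to the container's 22/tcp (SSH), as in "NetworkSettings.Ports" above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-863373
	# container IP on the per-profile network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-863373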
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863373 -n no-preload-863373
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-863373 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:11 UTC │                     │
	│ ssh     │ force-systemd-env-782848 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ delete  │ -p force-systemd-env-782848                                                                                                                                                                                                                         │ force-systemd-env-782848  │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:13 UTC │
	│ start   │ -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:13 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ ssh     │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p cert-options-913529                                                                                                                                                                                                                              │ cert-options-913529       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ stop    │ -p old-k8s-version-251758 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
	│ start   │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
	│ image   │ old-k8s-version-251758 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ pause   │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ delete  │ -p old-k8s-version-251758                                                                                                                                                                                                                           │ old-k8s-version-251758    │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
	│ stop    │ -p no-preload-863373 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
	│ addons  │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ image   │ no-preload-863373 image list --format=json                                                                                                                                                                                                          │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ pause   │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ unpause │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-863373         │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:18:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:18:04.210105  223335 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:18:04.210218  223335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:18:04.210229  223335 out.go:374] Setting ErrFile to fd 2...
	I1228 07:18:04.210235  223335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:18:04.210485  223335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:18:04.210833  223335 out.go:368] Setting JSON to false
	I1228 07:18:04.211668  223335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3634,"bootTime":1766902650,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:18:04.211737  223335 start.go:143] virtualization:  
	I1228 07:18:04.214696  223335 out.go:179] * [no-preload-863373] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:18:04.218567  223335 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:18:04.218676  223335 notify.go:221] Checking for updates...
	I1228 07:18:04.224562  223335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:18:04.227469  223335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:04.230365  223335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:18:04.233196  223335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:18:04.236017  223335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:18:04.239421  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:04.239974  223335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:18:04.273577  223335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:18:04.273686  223335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:18:04.335584  223335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:18:04.326234863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:18:04.335691  223335 docker.go:319] overlay module found
	I1228 07:18:04.338880  223335 out.go:179] * Using the docker driver based on existing profile
	I1228 07:18:04.341707  223335 start.go:309] selected driver: docker
	I1228 07:18:04.341730  223335 start.go:928] validating driver "docker" against &{Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:04.341838  223335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:18:04.342588  223335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:18:04.401750  223335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:18:04.390907262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:18:04.402087  223335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:18:04.402124  223335 cni.go:84] Creating CNI manager for ""
	I1228 07:18:04.402185  223335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:18:04.402232  223335 start.go:353] cluster config:
	{Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:04.405461  223335 out.go:179] * Starting "no-preload-863373" primary control-plane node in "no-preload-863373" cluster
	I1228 07:18:04.408395  223335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:18:04.411369  223335 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:18:04.414099  223335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:18:04.414170  223335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:18:04.414245  223335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/config.json ...
	I1228 07:18:04.414544  223335 cache.go:107] acquiring lock: {Name:mk675ae57a43ad1dcd013ca7bfeabdb5cfff3e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414636  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 07:18:04.414648  223335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.851µs
	I1228 07:18:04.414662  223335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 07:18:04.414674  223335 cache.go:107] acquiring lock: {Name:mk25211b1bcacda03f06c284c2f1d87c293f500d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414709  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 07:18:04.414719  223335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 46.917µs
	I1228 07:18:04.414726  223335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 07:18:04.414735  223335 cache.go:107] acquiring lock: {Name:mkecaafbf95bf5e637e02091f1e57be37fd26cd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414766  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 07:18:04.414776  223335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 41.806µs
	I1228 07:18:04.414783  223335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 07:18:04.414792  223335 cache.go:107] acquiring lock: {Name:mk4aac8221878cba9f26c204ecfa3d180ffd3c99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414824  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 07:18:04.414833  223335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 42.519µs
	I1228 07:18:04.414847  223335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 07:18:04.414857  223335 cache.go:107] acquiring lock: {Name:mk95a06d630633758c0fce1c69bbc91e5f9c1763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414889  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 07:18:04.414899  223335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 43.184µs
	I1228 07:18:04.414906  223335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 07:18:04.414916  223335 cache.go:107] acquiring lock: {Name:mk8bad262aa0eb5c84fb05bde99edcd03a9862ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.414948  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1228 07:18:04.414957  223335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.027µs
	I1228 07:18:04.414963  223335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 07:18:04.414986  223335 cache.go:107] acquiring lock: {Name:mke834d38136a3f1bc976929fc24bcc96d745a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.415019  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 07:18:04.415029  223335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 44.373µs
	I1228 07:18:04.415035  223335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 07:18:04.415044  223335 cache.go:107] acquiring lock: {Name:mk167fc96a63a95f47f0169364d746a7806993b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.415076  223335 cache.go:115] /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 07:18:04.415085  223335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 41.772µs
	I1228 07:18:04.415091  223335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 07:18:04.415097  223335 cache.go:87] Successfully saved all images to host disk.
	I1228 07:18:04.434351  223335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:18:04.434371  223335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:18:04.434386  223335 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:18:04.434414  223335 start.go:360] acquireMachinesLock for no-preload-863373: {Name:mk40422d73ffad526263a6e1c84f556b25bc76b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:18:04.434487  223335 start.go:364] duration metric: took 58.413µs to acquireMachinesLock for "no-preload-863373"
	I1228 07:18:04.434508  223335 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:18:04.434513  223335 fix.go:54] fixHost starting: 
	I1228 07:18:04.434773  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:04.451680  223335 fix.go:112] recreateIfNeeded on no-preload-863373: state=Stopped err=<nil>
	W1228 07:18:04.451716  223335 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:18:04.455032  223335 out.go:252] * Restarting existing docker container for "no-preload-863373" ...
	I1228 07:18:04.455154  223335 cli_runner.go:164] Run: docker start no-preload-863373
	I1228 07:18:04.724808  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:04.758189  223335 kic.go:430] container "no-preload-863373" state is running.
	I1228 07:18:04.758585  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:04.789291  223335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/config.json ...
	I1228 07:18:04.789520  223335 machine.go:94] provisionDockerMachine start ...
	I1228 07:18:04.789580  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:04.814681  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:04.815001  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:04.815009  223335 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:18:04.815694  223335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:18:07.956090  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-863373
	
	I1228 07:18:07.956119  223335 ubuntu.go:182] provisioning hostname "no-preload-863373"
	I1228 07:18:07.956185  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:07.974423  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:07.974742  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:07.974758  223335 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-863373 && echo "no-preload-863373" | sudo tee /etc/hostname
	I1228 07:18:08.120237  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-863373
	
	I1228 07:18:08.120344  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.142670  223335 main.go:144] libmachine: Using SSH client type: native
	I1228 07:18:08.142997  223335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1228 07:18:08.143022  223335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-863373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-863373/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-863373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:18:08.280890  223335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:18:08.280929  223335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:18:08.280958  223335 ubuntu.go:190] setting up certificates
	I1228 07:18:08.280967  223335 provision.go:84] configureAuth start
	I1228 07:18:08.281034  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:08.299145  223335 provision.go:143] copyHostCerts
	I1228 07:18:08.299216  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:18:08.299239  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:18:08.299321  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:18:08.299435  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:18:08.299446  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:18:08.299476  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:18:08.299547  223335 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:18:08.299557  223335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:18:08.299583  223335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:18:08.299646  223335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.no-preload-863373 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-863373]
	I1228 07:18:08.553781  223335 provision.go:177] copyRemoteCerts
	I1228 07:18:08.553851  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:18:08.553895  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.571197  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.668183  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:18:08.686448  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:18:08.704354  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:18:08.721904  223335 provision.go:87] duration metric: took 440.912445ms to configureAuth
	I1228 07:18:08.721933  223335 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:18:08.722177  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:08.722195  223335 machine.go:97] duration metric: took 3.932666405s to provisionDockerMachine
	I1228 07:18:08.722214  223335 start.go:293] postStartSetup for "no-preload-863373" (driver="docker")
	I1228 07:18:08.722229  223335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:18:08.722301  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:18:08.722363  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.739998  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.840335  223335 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:18:08.843659  223335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:18:08.843690  223335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:18:08.843702  223335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:18:08.843775  223335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:18:08.843862  223335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:18:08.843964  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:18:08.851498  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:18:08.868737  223335 start.go:296] duration metric: took 146.503384ms for postStartSetup
	I1228 07:18:08.868816  223335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:18:08.868857  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:08.886416  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:08.985659  223335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:18:08.990529  223335 fix.go:56] duration metric: took 4.556008816s for fixHost
	I1228 07:18:08.990557  223335 start.go:83] releasing machines lock for "no-preload-863373", held for 4.556060321s
	I1228 07:18:08.990624  223335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-863373
	I1228 07:18:09.009999  223335 ssh_runner.go:195] Run: cat /version.json
	I1228 07:18:09.010030  223335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:18:09.010056  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:09.010085  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:09.035820  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:09.038178  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:09.226446  223335 ssh_runner.go:195] Run: systemctl --version
	I1228 07:18:09.233239  223335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:18:09.237834  223335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:18:09.237928  223335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:18:09.245528  223335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:18:09.245549  223335 start.go:496] detecting cgroup driver to use...
	I1228 07:18:09.245579  223335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:18:09.245629  223335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:18:09.261516  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:18:09.277164  223335 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:18:09.277260  223335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:18:09.292696  223335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:18:09.306633  223335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:18:09.412475  223335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:18:09.526698  223335 docker.go:234] disabling docker service ...
	I1228 07:18:09.526823  223335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:18:09.541636  223335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:18:09.554690  223335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:18:09.661341  223335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:18:09.780561  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:18:09.793330  223335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:18:09.806806  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:18:09.815963  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:18:09.824985  223335 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:18:09.825055  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:18:09.834061  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:18:09.843279  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:18:09.852260  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:18:09.861421  223335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:18:09.869678  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:18:09.878735  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:18:09.888119  223335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:18:09.897144  223335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:18:09.904614  223335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:18:09.912081  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:10.031060  223335 ssh_runner.go:195] Run: sudo systemctl restart containerd
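	A hedged check for the containerd settings the sed commands above applied before this restart; the file path and key names come from the log, the grep itself is illustrative:
		# confirm the rewritten values in /etc/containerd/config.toml
		sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
		# expected after the edits: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
		# restrict_oom_score_adj = false, enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d"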
	I1228 07:18:10.177228  223335 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:18:10.177293  223335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:18:10.180839  223335 start.go:574] Will wait 60s for crictl version
	I1228 07:18:10.180956  223335 ssh_runner.go:195] Run: which crictl
	I1228 07:18:10.184302  223335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:18:10.210425  223335 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:18:10.210516  223335 ssh_runner.go:195] Run: containerd --version
	I1228 07:18:10.229509  223335 ssh_runner.go:195] Run: containerd --version
	I1228 07:18:10.253636  223335 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:18:10.256638  223335 cli_runner.go:164] Run: docker network inspect no-preload-863373 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:18:10.272952  223335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:18:10.276871  223335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
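Note: the /etc/hosts one-liner above is an idempotent update: grep -v drops any stale host.minikube.internal entry, echo appends the fresh mapping, and the result is staged in a temp file and copied back in a single step. The same pattern in isolation:

	# Replace (or add) the host.minikube.internal mapping without duplicating it:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$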
	I1228 07:18:10.286598  223335 kubeadm.go:884] updating cluster {Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:18:10.286729  223335 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:18:10.286788  223335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:18:10.310966  223335 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:18:10.310993  223335 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:18:10.311002  223335 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1228 07:18:10.311103  223335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-863373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
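Note: in the kubelet drop-in above, the bare "ExecStart=" line is the standard systemd override idiom: for list-type settings such as ExecStart, an empty assignment clears whatever the base unit defined, and the following ExecStart= supplies the replacement command. The merged result can be inspected on the node with:

	# Base unit plus drop-ins, in the order systemd applies them:
	systemctl cat kubelet
	# Or list only what drop-ins override:
	systemd-delta --type=extended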
	I1228 07:18:10.311170  223335 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:18:10.338333  223335 cni.go:84] Creating CNI manager for ""
	I1228 07:18:10.338357  223335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:18:10.338376  223335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:18:10.338398  223335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-863373 NodeName:no-preload-863373 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:18:10.338526  223335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-863373"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
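Note: the kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). Recent kubeadm releases can lint such a file before it is used; a sketch, assuming the binaries staged under /var/lib/minikube/binaries:

	# Validate the generated config without applying anything:
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new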
	
	I1228 07:18:10.338595  223335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:18:10.346162  223335 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:18:10.346230  223335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:18:10.353752  223335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1228 07:18:10.366988  223335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:18:10.379587  223335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
	I1228 07:18:10.392268  223335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:18:10.395890  223335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:18:10.406107  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:10.514389  223335 ssh_runner.go:195] Run: sudo systemctl start kubelet
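Note: once the unit file and drop-in are in place, daemon-reload plus start is all kubelet needs; a quick health check on the node would be:

	systemctl is-active kubelet              # expect: active
	journalctl -u kubelet -n 20 --no-pager   # recent kubelet log lines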
	I1228 07:18:10.531222  223335 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373 for IP: 192.168.76.2
	I1228 07:18:10.531240  223335 certs.go:195] generating shared ca certs ...
	I1228 07:18:10.531257  223335 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:10.531406  223335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:18:10.531460  223335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:18:10.531473  223335 certs.go:257] generating profile certs ...
	I1228 07:18:10.531558  223335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.key
	I1228 07:18:10.531631  223335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.key.770dd85f
	I1228 07:18:10.531674  223335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.key
	I1228 07:18:10.531783  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:18:10.531819  223335 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:18:10.531831  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:18:10.531861  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:18:10.531889  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:18:10.531917  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:18:10.531970  223335 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:18:10.532649  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:18:10.555579  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:18:10.575470  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:18:10.594853  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:18:10.615158  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 07:18:10.634566  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:18:10.657760  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:18:10.677948  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:18:10.713366  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:18:10.776175  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:18:10.797138  223335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:18:10.815554  223335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:18:10.829954  223335 ssh_runner.go:195] Run: openssl version
	I1228 07:18:10.835921  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.843220  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:18:10.850734  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.854646  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.854732  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:18:10.895519  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:18:10.902867  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.910155  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:18:10.917712  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.921332  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.921409  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:18:10.963602  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:18:10.971617  223335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.980013  223335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:18:10.987604  223335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.991305  223335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:18:10.991377  223335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:18:11.033513  223335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
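Note: the hash/symlink pairs above follow OpenSSL's CA directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and OpenSSL looks up trust anchors as /etc/ssl/certs/<hash>.0. Checking one mapping by hand (minikubeCA hashes to b5213941 in this run):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # should point at minikubeCA.pem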
	I1228 07:18:11.040944  223335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:18:11.044660  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:18:11.086532  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:18:11.127993  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:18:11.171766  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:18:11.221651  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:18:11.271022  223335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
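Note: each check above uses `openssl x509 -checkend 86400`, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a nonzero exit is what would trigger regeneration. In isolation:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"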
	I1228 07:18:11.345458  223335 kubeadm.go:401] StartCluster: {Name:no-preload-863373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-863373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:18:11.345610  223335 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1228 07:18:11.388593  223335 cri.go:83] list returned 3 containers
	I1228 07:18:11.388673  223335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:18:11.404310  223335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:18:11.404333  223335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:18:11.404394  223335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:18:11.416870  223335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:18:11.417319  223335 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-863373" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:11.417428  223335 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-863373" cluster setting kubeconfig missing "no-preload-863373" context setting]
	I1228 07:18:11.417720  223335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.419282  223335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:18:11.430254  223335 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 07:18:11.430295  223335 kubeadm.go:602] duration metric: took 25.955678ms to restartPrimaryControlPlane
	I1228 07:18:11.430305  223335 kubeadm.go:403] duration metric: took 84.858959ms to StartCluster
	I1228 07:18:11.430320  223335 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.430386  223335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:18:11.431005  223335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:18:11.431225  223335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:18:11.431623  223335 config.go:182] Loaded profile config "no-preload-863373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:18:11.431618  223335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:18:11.431782  223335 addons.go:70] Setting storage-provisioner=true in profile "no-preload-863373"
	I1228 07:18:11.431798  223335 addons.go:239] Setting addon storage-provisioner=true in "no-preload-863373"
	W1228 07:18:11.431805  223335 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:18:11.431834  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.432329  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.432533  223335 addons.go:70] Setting default-storageclass=true in profile "no-preload-863373"
	I1228 07:18:11.432574  223335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-863373"
	I1228 07:18:11.432813  223335 addons.go:70] Setting metrics-server=true in profile "no-preload-863373"
	I1228 07:18:11.432836  223335 addons.go:239] Setting addon metrics-server=true in "no-preload-863373"
	W1228 07:18:11.432844  223335 addons.go:248] addon metrics-server should already be in state true
	I1228 07:18:11.432874  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.432988  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.433314  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.436919  223335 addons.go:70] Setting dashboard=true in profile "no-preload-863373"
	I1228 07:18:11.436951  223335 addons.go:239] Setting addon dashboard=true in "no-preload-863373"
	W1228 07:18:11.436960  223335 addons.go:248] addon dashboard should already be in state true
	I1228 07:18:11.437154  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.437066  223335 out.go:179] * Verifying Kubernetes components...
	I1228 07:18:11.439312  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.448626  223335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:18:11.517101  223335 addons.go:239] Setting addon default-storageclass=true in "no-preload-863373"
	W1228 07:18:11.517131  223335 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:18:11.517164  223335 host.go:66] Checking if "no-preload-863373" exists ...
	I1228 07:18:11.517652  223335 cli_runner.go:164] Run: docker container inspect no-preload-863373 --format={{.State.Status}}
	I1228 07:18:11.526533  223335 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:18:11.529975  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:18:11.530001  223335 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:18:11.530067  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.534838  223335 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:18:11.537689  223335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:18:11.540577  223335 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:18:11.540688  223335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:18:11.540704  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:18:11.540766  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.543381  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:18:11.543411  223335 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:18:11.543476  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.574365  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.599988  223335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:18:11.600011  223335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:18:11.600070  223335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-863373
	I1228 07:18:11.630274  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.641714  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.659574  223335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/no-preload-863373/id_rsa Username:docker}
	I1228 07:18:11.769637  223335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:18:11.829751  223335 node_ready.go:35] waiting up to 6m0s for node "no-preload-863373" to be "Ready" ...
	I1228 07:18:11.845571  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:18:11.845632  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:18:11.885019  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:18:11.885040  223335 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:18:11.941136  223335 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:18:11.941198  223335 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:18:11.949833  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:18:11.981069  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:18:12.017233  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:18:12.017299  223335 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:18:12.027357  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:18:12.248572  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:18:12.248637  223335 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:18:12.440818  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:18:12.440892  223335 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:18:12.577104  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:18:12.577181  223335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:18:12.620888  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:18:12.620965  223335 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:18:12.713946  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:18:12.714010  223335 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:18:12.745948  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:18:12.746025  223335 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:18:12.773897  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:18:12.773960  223335 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:18:12.817557  223335 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:18:12.817633  223335 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:18:12.866259  223335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:18:15.135447  223335 node_ready.go:49] node "no-preload-863373" is "Ready"
	I1228 07:18:15.135480  223335 node_ready.go:38] duration metric: took 3.3056549s for node "no-preload-863373" to be "Ready" ...
	I1228 07:18:15.135495  223335 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:18:15.135555  223335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:18:15.462247  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.512326263s)
	I1228 07:18:17.640276  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.659136774s)
	I1228 07:18:17.640365  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.612938994s)
	I1228 07:18:17.640375  223335 addons.go:495] Verifying addon metrics-server=true in "no-preload-863373"
	I1228 07:18:17.640533  223335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.77419249s)
	I1228 07:18:17.640712  223335 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.505139133s)
	I1228 07:18:17.640735  223335 api_server.go:72] duration metric: took 6.209481345s to wait for apiserver process to appear ...
	I1228 07:18:17.640742  223335 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:18:17.640758  223335 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:18:17.644240  223335 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-863373 addons enable metrics-server
	
	I1228 07:18:17.647231  223335 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1228 07:18:17.649152  223335 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:18:17.650313  223335 api_server.go:141] control plane version: v1.35.0
	I1228 07:18:17.650338  223335 api_server.go:131] duration metric: took 9.589428ms to wait for apiserver health ...
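Note: the healthz probe above is a plain HTTPS GET whose body is simply "ok" on a healthy apiserver; assuming anonymous auth is left at its Kubernetes default (where /healthz is readable without credentials), the same check works from the host:

	curl -k https://192.168.76.2:8443/healthz   # expect: ok (-k because the cert is signed by minikubeCA)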
	I1228 07:18:17.650348  223335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:18:17.650567  223335 addons.go:530] duration metric: took 6.218958648s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1228 07:18:17.654208  223335 system_pods.go:59] 9 kube-system pods found
	I1228 07:18:17.654243  223335 system_pods.go:61] "coredns-7d764666f9-j2lwq" [7ed3a1ae-d5ec-4274-9264-52845d3e00a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:18:17.654252  223335 system_pods.go:61] "etcd-no-preload-863373" [bb47b3ac-c648-44af-9179-e21fd315a5f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:18:17.654262  223335 system_pods.go:61] "kindnet-mm548" [061dcf01-8219-4abc-93df-5e4c3392c108] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:18:17.654269  223335 system_pods.go:61] "kube-apiserver-no-preload-863373" [27e3f644-63c0-43ad-bd3e-f6e49dd52278] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:18:17.654276  223335 system_pods.go:61] "kube-controller-manager-no-preload-863373" [039b2227-a1e8-4e1f-bed3-7b8c943fd581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:18:17.654283  223335 system_pods.go:61] "kube-proxy-t6l8g" [3e82261b-19a3-458f-b1f2-1d690115afc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:18:17.654289  223335 system_pods.go:61] "kube-scheduler-no-preload-863373" [b41aab7f-79af-4f32-ac5b-3cf2ad737dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:18:17.654296  223335 system_pods.go:61] "metrics-server-5d785b57d4-25rzl" [ffcb8454-7d9d-4854-9e0f-57c3468a22d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:18:17.654313  223335 system_pods.go:61] "storage-provisioner" [5e47dc59-09d2-4fc3-951f-e140d54cdab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:18:17.654320  223335 system_pods.go:74] duration metric: took 3.965836ms to wait for pod list to return data ...
	I1228 07:18:17.654327  223335 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:18:17.656798  223335 default_sa.go:45] found service account: "default"
	I1228 07:18:17.656825  223335 default_sa.go:55] duration metric: took 2.491198ms for default service account to be created ...
	I1228 07:18:17.656834  223335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 07:18:17.659570  223335 system_pods.go:86] 9 kube-system pods found
	I1228 07:18:17.659605  223335 system_pods.go:89] "coredns-7d764666f9-j2lwq" [7ed3a1ae-d5ec-4274-9264-52845d3e00a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 07:18:17.659615  223335 system_pods.go:89] "etcd-no-preload-863373" [bb47b3ac-c648-44af-9179-e21fd315a5f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:18:17.659624  223335 system_pods.go:89] "kindnet-mm548" [061dcf01-8219-4abc-93df-5e4c3392c108] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 07:18:17.659632  223335 system_pods.go:89] "kube-apiserver-no-preload-863373" [27e3f644-63c0-43ad-bd3e-f6e49dd52278] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:18:17.659644  223335 system_pods.go:89] "kube-controller-manager-no-preload-863373" [039b2227-a1e8-4e1f-bed3-7b8c943fd581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:18:17.659651  223335 system_pods.go:89] "kube-proxy-t6l8g" [3e82261b-19a3-458f-b1f2-1d690115afc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 07:18:17.659666  223335 system_pods.go:89] "kube-scheduler-no-preload-863373" [b41aab7f-79af-4f32-ac5b-3cf2ad737dd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:18:17.659673  223335 system_pods.go:89] "metrics-server-5d785b57d4-25rzl" [ffcb8454-7d9d-4854-9e0f-57c3468a22d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 07:18:17.659680  223335 system_pods.go:89] "storage-provisioner" [5e47dc59-09d2-4fc3-951f-e140d54cdab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 07:18:17.659697  223335 system_pods.go:126] duration metric: took 2.85724ms to wait for k8s-apps to be running ...
	I1228 07:18:17.659705  223335 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 07:18:17.659764  223335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:18:17.672615  223335 system_svc.go:56] duration metric: took 12.901712ms WaitForService to wait for kubelet
	I1228 07:18:17.672642  223335 kubeadm.go:587] duration metric: took 6.241387509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:18:17.672659  223335 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:18:17.675572  223335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:18:17.675605  223335 node_conditions.go:123] node cpu capacity is 2
	I1228 07:18:17.675618  223335 node_conditions.go:105] duration metric: took 2.953865ms to run NodePressure ...
	I1228 07:18:17.675649  223335 start.go:242] waiting for startup goroutines ...
	I1228 07:18:17.675663  223335 start.go:247] waiting for cluster config update ...
	I1228 07:18:17.675675  223335 start.go:256] writing updated cluster config ...
	I1228 07:18:17.675964  223335 ssh_runner.go:195] Run: rm -f paused
	I1228 07:18:17.679665  223335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:18:17.682783  223335 pod_ready.go:83] waiting for pod "coredns-7d764666f9-j2lwq" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 07:18:19.689383  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:22.188555  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:24.688642  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:26.688757  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:29.188106  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:31.188312  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:33.188567  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:35.188655  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:37.188761  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:39.688883  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:42.189276  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:44.189644  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:46.688703  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:49.188146  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	W1228 07:18:51.190791  223335 pod_ready.go:104] pod "coredns-7d764666f9-j2lwq" is not "Ready", error: <nil>
	I1228 07:18:53.196638  223335 pod_ready.go:94] pod "coredns-7d764666f9-j2lwq" is "Ready"
	I1228 07:18:53.196670  223335 pod_ready.go:86] duration metric: took 35.513860449s for pod "coredns-7d764666f9-j2lwq" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.199594  223335 pod_ready.go:83] waiting for pod "etcd-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.209072  223335 pod_ready.go:94] pod "etcd-no-preload-863373" is "Ready"
	I1228 07:18:53.209102  223335 pod_ready.go:86] duration metric: took 9.481594ms for pod "etcd-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.211391  223335 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.216639  223335 pod_ready.go:94] pod "kube-apiserver-no-preload-863373" is "Ready"
	I1228 07:18:53.216709  223335 pod_ready.go:86] duration metric: took 5.287801ms for pod "kube-apiserver-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.219468  223335 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.386920  223335 pod_ready.go:94] pod "kube-controller-manager-no-preload-863373" is "Ready"
	I1228 07:18:53.386949  223335 pod_ready.go:86] duration metric: took 167.456125ms for pod "kube-controller-manager-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.587107  223335 pod_ready.go:83] waiting for pod "kube-proxy-t6l8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:53.986661  223335 pod_ready.go:94] pod "kube-proxy-t6l8g" is "Ready"
	I1228 07:18:53.986687  223335 pod_ready.go:86] duration metric: took 399.548208ms for pod "kube-proxy-t6l8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.187166  223335 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.586352  223335 pod_ready.go:94] pod "kube-scheduler-no-preload-863373" is "Ready"
	I1228 07:18:54.586384  223335 pod_ready.go:86] duration metric: took 399.153816ms for pod "kube-scheduler-no-preload-863373" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 07:18:54.586396  223335 pod_ready.go:40] duration metric: took 36.90666167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 07:18:54.637161  223335 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1228 07:18:54.640314  223335 out.go:203] 
	W1228 07:18:54.643220  223335 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1228 07:18:54.646050  223335 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:18:54.648843  223335 out.go:179] * Done! kubectl is now configured to use "no-preload-863373" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cd82a9b6bb7ca       66749159455b3       12 seconds ago       Running             storage-provisioner       2                   aa85ec172a655       storage-provisioner                         kube-system
	e8498ba6c7fd2       20b332c9a70d8       49 seconds ago       Running             kubernetes-dashboard      0                   45054230490ed       kubernetes-dashboard-b84665fb8-5gjbr        kubernetes-dashboard
	1f6698edd2aad       e08f4d9d2e6ed       54 seconds ago       Running             coredns                   1                   fa30d8b38b1b5       coredns-7d764666f9-j2lwq                    kube-system
	8a72355eb61b1       1611cd07b61d5       55 seconds ago       Running             busybox                   1                   ea4a79b855665       busybox                                     default
	d903784e30545       66749159455b3       55 seconds ago       Exited              storage-provisioner       1                   aa85ec172a655       storage-provisioner                         kube-system
	17028f4a8bdf4       de369f46c2ff5       55 seconds ago       Running             kube-proxy                1                   7953d967b9477       kube-proxy-t6l8g                            kube-system
	7a0af7c47c84a       c96ee3c174987       55 seconds ago       Running             kindnet-cni               1                   8e019971f7b5d       kindnet-mm548                               kube-system
	ad6e6187d14d1       271e49a0ebc56       About a minute ago   Running             etcd                      1                   ce2164338b232       etcd-no-preload-863373                      kube-system
	99dd45648c833       88898f1d1a62a       About a minute ago   Running             kube-controller-manager   1                   aa7531590caad       kube-controller-manager-no-preload-863373   kube-system
	615dec4099747       c3fcf259c473a       About a minute ago   Running             kube-apiserver            1                   60a7a579e1b57       kube-apiserver-no-preload-863373            kube-system
	adce0e91e1531       ddc8422d4d35a       About a minute ago   Running             kube-scheduler            1                   ac86e3248fdb4       kube-scheduler-no-preload-863373            kube-system
	1eabe3bb41409       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   da452e3bf4b46       busybox                                     default
	1fbaac41ada66       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   72c046ad534ee       coredns-7d764666f9-j2lwq                    kube-system
	b2e44eed719fa       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   e27b756f4c142       kindnet-mm548                               kube-system
	b58d0fb33e410       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   f2edb88c5130e       kube-proxy-t6l8g                            kube-system
	7232b6de7b7f1       88898f1d1a62a       About a minute ago   Exited              kube-controller-manager   0                   30c3c1a417aee       kube-controller-manager-no-preload-863373   kube-system
	5273cd83fffd8       ddc8422d4d35a       About a minute ago   Exited              kube-scheduler            0                   f346362e15fea       kube-scheduler-no-preload-863373            kube-system
	b5aa7e5bff890       271e49a0ebc56       About a minute ago   Exited              etcd                      0                   1d16241ac7917       etcd-no-preload-863373                      kube-system
	4c9946cb0eff6       c3fcf259c473a       About a minute ago   Exited              kube-apiserver            0                   7fd19e0d00e64       kube-apiserver-no-preload-863373            kube-system
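Note: in the table above, each control-plane component appears twice: the Exited row with ATTEMPT 0 is the pre-restart container and the Running row with ATTEMPT 1 its replacement, which is the expected shape after restartPrimaryControlPlane. The same view is available directly on the node via the CRI:

	sudo crictl ps -a   # all containers, including exited ones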
	
	
	==> containerd <==
	Dec 28 07:18:59 no-preload-863373 containerd[556]: time="2025-12-28T07:18:59.783002200Z" level=info msg="StartContainer for \"cd82a9b6bb7ca8da371d784b05ef58671be745315dafe8d5afdbec56bcab4ba2\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.493487405Z" level=info msg="StopPodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.493996447Z" level=info msg="TearDown network for sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.494039090Z" level=info msg="StopPodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511408935Z" level=info msg="RemovePodSandbox for \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511458453Z" level=info msg="Forcibly stopping sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.511901296Z" level=info msg="TearDown network for sandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.515465275Z" level=info msg="Ensure that sandbox f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c in task-service has been cleanup successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.522735272Z" level=info msg="RemovePodSandbox \"f4b28ac76f76825f87ec5b29bba2b477460d4763a27d270136fffa983c9a867c\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.527209568Z" level=info msg="StopPodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.567899310Z" level=info msg="TearDown network for sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.580009768Z" level=info msg="StopPodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" returns successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.584543774Z" level=info msg="RemovePodSandbox for \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.584681408Z" level=info msg="Forcibly stopping sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\""
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.627429416Z" level=info msg="TearDown network for sandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.639029134Z" level=info msg="Ensure that sandbox ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8 in task-service has been cleanup successfully"
	Dec 28 07:19:08 no-preload-863373 containerd[556]: time="2025-12-28T07:19:08.650788574Z" level=info msg="RemovePodSandbox \"ca90c502a3675bc3a6f7a71174db50a393cf3231435a438637d41cfad0ea05a8\" returns successfully"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.095122327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.674748232Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.682194657Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.685682016Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.685724248Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.688515239Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.858397788Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:19:09 no-preload-863373 containerd[556]: time="2025-12-28T07:19:09.858630266Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               no-preload-863373
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-863373
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-863373
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_17_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:17:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-863373
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:19:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:19:09 +0000   Sun, 28 Dec 2025 07:17:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-863373
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                aa6b9168-cbdb-42d6-933d-d9f7f74ef280
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-j2lwq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-no-preload-863373                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-mm548                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-863373              250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-863373     200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-t6l8g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-863373              100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 metrics-server-5d785b57d4-25rzl               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         81s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-2sr6v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5gjbr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-863373 event: Registered Node no-preload-863373 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node no-preload-863373 event: Registered Node no-preload-863373 in Controller
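	For reference, the percentages in "Allocated resources" derive from the node's allocatable figures: CPU requests of 950m against 2 CPUs (2000m) is 950/2000 = 47.5%, shown as 47%; memory requests of 420Mi against 8022296Ki (roughly 7834Mi) is 420/7834 ≈ 5.4%, shown as 5%. The displayed values appear to be truncated rather than rounded.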
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:19:12 up  1:01,  0 user,  load average: 1.35, 1.60, 1.73
	Linux no-preload-863373 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374106    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12c641bd941acee30af44f721a732c22-etc-ca-certificates\") pod \"kube-apiserver-no-preload-863373\" (UID: \"12c641bd941acee30af44f721a732c22\") " pod="kube-system/kube-apiserver-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374465    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e51a4f40f514b176d42b2ded59e69497-ca-certs\") pod \"kube-controller-manager-no-preload-863373\" (UID: \"e51a4f40f514b176d42b2ded59e69497\") " pod="kube-system/kube-controller-manager-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.374572    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e51a4f40f514b176d42b2ded59e69497-usr-local-share-ca-certificates\") pod \"kube-controller-manager-no-preload-863373\" (UID: \"e51a4f40f514b176d42b2ded59e69497\") " pod="kube-system/kube-controller-manager-no-preload-863373"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.414758    2383 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.475641    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e47dc59-09d2-4fc3-951f-e140d54cdab2-tmp\") pod \"storage-provisioner\" (UID: \"5e47dc59-09d2-4fc3-951f-e140d54cdab2\") " pod="kube-system/storage-provisioner"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.475936    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-xtables-lock\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476147    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-cni-cfg\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476333    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e82261b-19a3-458f-b1f2-1d690115afc1-xtables-lock\") pod \"kube-proxy-t6l8g\" (UID: \"3e82261b-19a3-458f-b1f2-1d690115afc1\") " pod="kube-system/kube-proxy-t6l8g"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476507    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061dcf01-8219-4abc-93df-5e4c3392c108-lib-modules\") pod \"kindnet-mm548\" (UID: \"061dcf01-8219-4abc-93df-5e4c3392c108\") " pod="kube-system/kindnet-mm548"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: I1228 07:19:09.476629    2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e82261b-19a3-458f-b1f2-1d690115afc1-lib-modules\") pod \"kube-proxy-t6l8g\" (UID: \"3e82261b-19a3-458f-b1f2-1d690115afc1\") " pod="kube-system/kube-proxy-t6l8g"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.686853    2383 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.686937    2383 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.687276    2383 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-25rzl_kube-system(ffcb8454-7d9d-4854-9e0f-57c3468a22d3): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.687320    2383 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-25rzl" podUID="ffcb8454-7d9d-4854-9e0f-57c3468a22d3"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787143    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-863373" containerName="kube-scheduler"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787547    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-863373" containerName="kube-controller-manager"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.787870    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-863373" containerName="kube-apiserver"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.788168    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-863373" containerName="etcd"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.858971    2383 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859028    2383 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859237    2383 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-2sr6v_kubernetes-dashboard(0af8c712-b414-4036-b067-b58dac667efd): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:19:09 no-preload-863373 kubelet[2383]: E1228 07:19:09.859278    2383 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-2sr6v" podUID="0af8c712-b414-4036-b067-b58dac667efd"
	Dec 28 07:19:10 no-preload-863373 kubelet[2383]: E1228 07:19:10.790167    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-863373" containerName="etcd"
	Dec 28 07:19:10 no-preload-863373 kubelet[2383]: E1228 07:19:10.790571    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-863373" containerName="kube-scheduler"
	Dec 28 07:19:10 no-preload-863373 kubelet[2383]: E1228 07:19:10.791024    2383 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-863373" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863373 -n no-preload-863373
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-863373 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v: exit status 1 (81.979521ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-25rzl" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-2sr6v" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-863373 describe pod metrics-server-5d785b57d4-25rzl dashboard-metrics-scraper-867fb5f87b-2sr6v: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (9.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-468470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-468470 -n embed-certs-468470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-468470 -n embed-certs-468470: exit status 2 (489.458077ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-468470 -n embed-certs-468470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-468470 -n embed-certs-468470: exit status 2 (536.016077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-468470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-468470 -n embed-certs-468470
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-468470 -n embed-certs-468470
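The pause verification that fails here is a straightforward sequence: pause the profile, then ask minikube status for the APIServer and Kubelet fields and compare against the expected post-pause values ("Paused" and "Stopped"). The Go sketch below is a minimal illustration of that check, not the test's actual helper code; it assumes out/minikube-linux-arm64 has been built and the embed-certs-468470 profile exists.

	// pause_check.go: illustrative post-pause status check (hypothetical
	// helper names; the real logic lives in start_stop_delete_test.go).
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// componentStatus runs `minikube status` for one component field.
	// minikube status exits non-zero when components are paused or stopped,
	// so the exit code is reported but not treated as fatal.
	func componentStatus(profile, format string) string {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format="+format, "-p", profile, "-n", profile).Output()
		if err != nil {
			fmt.Printf("status exited non-zero (may be ok): %v\n", err)
		}
		return strings.TrimSpace(string(out))
	}
	
	func main() {
		const profile = "embed-certs-468470"
		if err := exec.Command("out/minikube-linux-arm64", "pause",
			"-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
			panic(err)
		}
		if got := componentStatus(profile, "{{.APIServer}}"); got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want %q\n", got, "Paused")
		}
		if got := componentStatus(profile, "{{.Kubelet}}"); got != "Stopped" {
			fmt.Printf("post-pause kubelet status = %q; want %q\n", got, "Stopped")
		}
	}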
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-468470
helpers_test.go:244: (dbg) docker inspect embed-certs-468470:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443",
	        "Created": "2025-12-28T07:19:20.700123673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:20:25.319933386Z",
	            "FinishedAt": "2025-12-28T07:20:24.194180083Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/hostname",
	        "HostsPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/hosts",
	        "LogPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443-json.log",
	        "Name": "/embed-certs-468470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-468470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-468470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443",
	                "LowerDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-468470",
	                "Source": "/var/lib/docker/volumes/embed-certs-468470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-468470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-468470",
	                "name.minikube.sigs.k8s.io": "embed-certs-468470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9739dc00421d5ea8a8ab9bd499eba190cc08a8f7668282249813b6ea0c193c2e",
	            "SandboxKey": "/var/run/docker/netns/9739dc00421d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-468470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b7:e6:d9:c8:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ea6e58d49dc7fb8843557882dde530e89cbe22a2196cb716224bb2dd25a58ac",
	                    "EndpointID": "d3e8b198756d41337960f37e12faf6398220650e22ce40df074734a633c87e0c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-468470",
	                        "53693b36f335"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
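Note in the inspect output that HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports; the actual assignments only show up under NetworkSettings.Ports (e.g., 8443/tcp mapped to 127.0.0.1:33088). A hedged sketch of recovering such a mapping programmatically, using docker inspect's Go-template support (container name taken from the output above):

	// port_lookup.go: print the host port Docker assigned to the
	// container's 8443/tcp (the apiserver port).
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"embed-certs-468470").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver mapped to 127.0.0.1:" + strings.TrimSpace(string(out)))
	}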
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-468470 -n embed-certs-468470
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-468470 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-468470 logs -n 25: (1.595286762s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                        │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
	│ stop    │ -p no-preload-863373 --alsologtostderr -v=3                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
	│ addons  │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                   │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                  │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ image   │ no-preload-863373 image list --format=json                                                                                                                                     │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ pause   │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ unpause │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                           │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                           │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:20 UTC │
	│ ssh     │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml                                                                                                                  │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p force-systemd-flag-257442                                                                                                                                                   │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ delete  │ -p disable-driver-mounts-120791                                                                                                                                                │ disable-driver-mounts-120791 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                       │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ stop    │ -p embed-certs-468470 --alsologtostderr -v=3                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-468470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                  │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ stop    │ -p default-k8s-diff-port-450028 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │                     │
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                    │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:21:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:21:16.275242  239132 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:21:16.275445  239132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:21:16.275472  239132 out.go:374] Setting ErrFile to fd 2...
	I1228 07:21:16.275492  239132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:21:16.275771  239132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:21:16.276180  239132 out.go:368] Setting JSON to false
	I1228 07:21:16.278343  239132 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3826,"bootTime":1766902650,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:21:16.278465  239132 start.go:143] virtualization:  
	I1228 07:21:16.283614  239132 out.go:179] * [default-k8s-diff-port-450028] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:21:16.286732  239132 notify.go:221] Checking for updates...
	I1228 07:21:16.289989  239132 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:21:16.293346  239132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:21:16.296291  239132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:16.299171  239132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:21:16.302623  239132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:21:16.305593  239132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:21:16.309039  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:16.309585  239132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:21:16.343236  239132 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:21:16.343330  239132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:21:16.403472  239132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:21:16.393288911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:21:16.403574  239132 docker.go:319] overlay module found
	I1228 07:21:16.406661  239132 out.go:179] * Using the docker driver based on existing profile
	I1228 07:21:16.409506  239132 start.go:309] selected driver: docker
	I1228 07:21:16.409523  239132 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:21:16.409666  239132 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:21:16.410493  239132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:21:16.468163  239132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:21:16.459382207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:21:16.468613  239132 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:21:16.468645  239132 cni.go:84] Creating CNI manager for ""
	I1228 07:21:16.468706  239132 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:21:16.468747  239132 start.go:353] cluster config:
	{Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
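	The fields that actually drive the rest of this run are a small subset of that dump. As a reading aid, a hypothetical Go reduction (field names copied from the dump above; this is not minikube's ClusterConfig type):

```go
package main

import "fmt"

// ClusterSubset is a hypothetical reduction of the config dump above;
// only the fields exercised later in this log are included.
type ClusterSubset struct {
	Name             string // profile name
	Driver           string // "docker"
	ContainerRuntime string // "containerd"
	KubernetesVer    string // "v1.35.0"
	APIServerPort    int    // 8444, the non-default port under test
	NodeIP           string // "192.168.85.2"
	MemoryMiB        int
	CPUs             int
}

func main() {
	c := ClusterSubset{
		Name:             "default-k8s-diff-port-450028",
		Driver:           "docker",
		ContainerRuntime: "containerd",
		KubernetesVer:    "v1.35.0",
		APIServerPort:    8444,
		NodeIP:           "192.168.85.2",
		MemoryMiB:        3072,
		CPUs:             2,
	}
	fmt.Printf("%s: %s/%s, k8s %s on %s:%d\n",
		c.Name, c.Driver, c.ContainerRuntime, c.KubernetesVer, c.NodeIP, c.APIServerPort)
}
```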
	I1228 07:21:16.473648  239132 out.go:179] * Starting "default-k8s-diff-port-450028" primary control-plane node in "default-k8s-diff-port-450028" cluster
	I1228 07:21:16.477939  239132 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:21:16.480933  239132 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:21:16.483827  239132 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:21:16.483875  239132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:21:16.483886  239132 cache.go:65] Caching tarball of preloaded images
	I1228 07:21:16.483909  239132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:21:16.483988  239132 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:21:16.483999  239132 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:21:16.484117  239132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/config.json ...
	I1228 07:21:16.503152  239132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:21:16.503176  239132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:21:16.503193  239132 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:21:16.503226  239132 start.go:360] acquireMachinesLock for default-k8s-diff-port-450028: {Name:mkeeedd54bbc599ba85ff6f61843f99b2783c4c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:21:16.503283  239132 start.go:364] duration metric: took 36.111µs to acquireMachinesLock for "default-k8s-diff-port-450028"
	I1228 07:21:16.503306  239132 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:21:16.503318  239132 fix.go:54] fixHost starting: 
	I1228 07:21:16.503577  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:16.521563  239132 fix.go:112] recreateIfNeeded on default-k8s-diff-port-450028: state=Stopped err=<nil>
	W1228 07:21:16.521595  239132 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:21:16.524786  239132 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-450028" ...
	I1228 07:21:16.524868  239132 cli_runner.go:164] Run: docker start default-k8s-diff-port-450028
	I1228 07:21:16.780017  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:16.808012  239132 kic.go:430] container "default-k8s-diff-port-450028" state is running.
	I1228 07:21:16.808409  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:16.831673  239132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/config.json ...
	I1228 07:21:16.831916  239132 machine.go:94] provisionDockerMachine start ...
	I1228 07:21:16.831973  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:16.853188  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:16.853504  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:16.853513  239132 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:21:16.854176  239132 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:21:19.988138  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-450028
	
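	The handshake EOF a moment earlier is expected right after `docker start`: sshd inside the container is not accepting connections yet, so the provisioner retries until it is. A minimal sketch of such a retry loop using golang.org/x/crypto/ssh (illustrative, reusing the port and key path from this log; not minikube's actual code):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd inside the freshly started
// container accepts the handshake, or the deadline expires.
func dialWithRetry(addr, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready before deadline: %w", err)
		}
		time.Sleep(500 * time.Millisecond) // e.g. "handshake failed: EOF" while sshd boots
	}
}

func main() {
	keyPath := os.Getenv("HOME") + "/.minikube/machines/default-k8s-diff-port-450028/id_rsa"
	client, err := dialWithRetry("127.0.0.1:33090", keyPath, time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}
```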
	I1228 07:21:19.988166  239132 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-450028"
	I1228 07:21:19.988245  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.018603  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:20.018939  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:20.018960  239132 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-450028 && echo "default-k8s-diff-port-450028" | sudo tee /etc/hostname
	I1228 07:21:20.166339  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-450028
	
	I1228 07:21:20.166495  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.185372  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:20.185698  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:20.185721  239132 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-450028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-450028/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-450028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:21:20.320861  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: 
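	The shell above guarantees a 127.0.1.1 entry for the new hostname: rewrite an existing 127.0.1.1 line if present, otherwise append one. The same ensure-entry logic in Go, run against a scratch file so it needs no root (illustrative; the real step runs the shell over SSH):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: if no line already names the
// host, rewrite an existing 127.0.1.1 line or append a fresh one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte(entry+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	// Scratch copy so the sketch does not need root.
	tmp := "hosts.sketch"
	os.WriteFile(tmp, []byte("127.0.0.1 localhost\n127.0.1.1 old-name\n"), 0644)
	if err := ensureHostsEntry(tmp, "default-k8s-diff-port-450028"); err != nil {
		fmt.Println("error:", err)
		return
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}
```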
	I1228 07:21:20.320897  239132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:21:20.320921  239132 ubuntu.go:190] setting up certificates
	I1228 07:21:20.320930  239132 provision.go:84] configureAuth start
	I1228 07:21:20.320994  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:20.340995  239132 provision.go:143] copyHostCerts
	I1228 07:21:20.341064  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:21:20.341086  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:21:20.341165  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:21:20.341269  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:21:20.341281  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:21:20.341309  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:21:20.341379  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:21:20.341386  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:21:20.341414  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:21:20.341477  239132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-450028 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-450028 localhost minikube]
	I1228 07:21:20.618564  239132 provision.go:177] copyRemoteCerts
	I1228 07:21:20.618630  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:21:20.618680  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.637420  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:20.737295  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:21:20.755454  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 07:21:20.772924  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:21:20.790287  239132 provision.go:87] duration metric: took 469.332526ms to configureAuth
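	configureAuth regenerates the machine's server certificate with the SAN list shown in the provision.go line above. A self-contained sketch of building such a certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the shared minikube CA):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-450028"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[...] from the provision.go line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-450028", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```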
	I1228 07:21:20.790336  239132 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:21:20.790527  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:20.790543  239132 machine.go:97] duration metric: took 3.958620266s to provisionDockerMachine
	I1228 07:21:20.790552  239132 start.go:293] postStartSetup for "default-k8s-diff-port-450028" (driver="docker")
	I1228 07:21:20.790561  239132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:21:20.790616  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:21:20.790667  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.809297  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:20.908780  239132 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:21:20.912575  239132 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:21:20.912604  239132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:21:20.912617  239132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:21:20.912673  239132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:21:20.912767  239132 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:21:20.912881  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:21:20.921205  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:21:20.939015  239132 start.go:296] duration metric: took 148.448297ms for postStartSetup
	I1228 07:21:20.939154  239132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:21:20.939199  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.958008  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.058104  239132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:21:21.062926  239132 fix.go:56] duration metric: took 4.559600901s for fixHost
	I1228 07:21:21.062954  239132 start.go:83] releasing machines lock for "default-k8s-diff-port-450028", held for 4.559658452s
	I1228 07:21:21.063024  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:21.080040  239132 ssh_runner.go:195] Run: cat /version.json
	I1228 07:21:21.080098  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:21.080400  239132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:21:21.080512  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:21.097997  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.101101  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.196081  239132 ssh_runner.go:195] Run: systemctl --version
	I1228 07:21:21.296900  239132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:21:21.302004  239132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:21:21.302074  239132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:21:21.310460  239132 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:21:21.310482  239132 start.go:496] detecting cgroup driver to use...
	I1228 07:21:21.310516  239132 detect.go:187] detected "cgroupfs" cgroup driver on host os
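	That detection decides whether the later config rewrite sets SystemdCgroup to true or false. For the docker driver the same question can be put to the engine directly (a sketch; minikube's detect.go does more than this):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the container engine which cgroup driver it uses. For the docker
	// driver this answers the same question the log line above records.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Printf("detected %q cgroup driver on host os\n", strings.TrimSpace(string(out)))
}
```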
	I1228 07:21:21.310567  239132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:21:21.334255  239132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:21:21.348960  239132 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:21:21.349054  239132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:21:21.366205  239132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:21:21.379673  239132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:21:21.501457  239132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:21:21.620802  239132 docker.go:234] disabling docker service ...
	I1228 07:21:21.620924  239132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:21:21.635997  239132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:21:21.649445  239132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:21:21.766509  239132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:21:21.890775  239132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:21:21.903699  239132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:21:21.918987  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:21:21.929415  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:21:21.938715  239132 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:21:21.938811  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:21:21.948302  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:21:21.958495  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:21:21.967725  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:21:21.977019  239132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:21:21.985358  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:21:21.994850  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:21:22.005317  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:21:22.017132  239132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:21:22.026926  239132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:21:22.035532  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:22.159559  239132 ssh_runner.go:195] Run: sudo systemctl restart containerd
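	The preceding run of sed commands flips containerd to the cgroupfs driver and normalizes its runtime and CNI settings before the restart. The SystemdCgroup rewrite, reproduced in Go against a scratch copy of config.toml (a sketch; the real step edits /etc/containerd/config.toml over SSH):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Scratch copy holding the one setting the sed edits above care most about.
	path := "config.toml.sketch"
	os.WriteFile(path, []byte("[runtimes.runc.options]\n  SystemdCgroup = true\n"), 0644)

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	os.WriteFile(path, data, 0644)

	out, _ := os.ReadFile(path)
	fmt.Print(string(out))
}
```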
	I1228 07:21:22.344544  239132 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:21:22.344656  239132 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:21:22.349412  239132 start.go:574] Will wait 60s for crictl version
	I1228 07:21:22.349526  239132 ssh_runner.go:195] Run: which crictl
	I1228 07:21:22.353443  239132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:21:22.378339  239132 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:21:22.378471  239132 ssh_runner.go:195] Run: containerd --version
	I1228 07:21:22.399175  239132 ssh_runner.go:195] Run: containerd --version
	I1228 07:21:22.428950  239132 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:21:22.432038  239132 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-450028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:21:22.449060  239132 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:21:22.452980  239132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:21:22.462956  239132 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:21:22.463078  239132 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:21:22.463145  239132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:21:22.523378  239132 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:21:22.523400  239132 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:21:22.523467  239132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:21:22.557130  239132 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:21:22.557148  239132 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:21:22.557156  239132 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:21:22.557253  239132 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-450028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
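	That drop-in is rendered from the node values and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A sketch of the templating step with text/template (the template mirrors the unit text above; the rendering code is illustrative, not minikube's):

```go
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values as they appear in this run's log.
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.35.0",
		"NodeName":          "default-k8s-diff-port-450028",
		"NodeIP":            "192.168.85.2",
	})
}
```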
	I1228 07:21:22.557314  239132 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:21:22.586073  239132 cni.go:84] Creating CNI manager for ""
	I1228 07:21:22.586100  239132 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:21:22.586122  239132 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:21:22.586151  239132 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-450028 NodeName:default-k8s-diff-port-450028 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:21:22.586288  239132 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-450028"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
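	Two values in the generated config have to agree across sections: podSubnet reappears as kube-proxy's clusterCIDR, and serviceSubnet is the ServiceCIDR from the kubeadm options above. A stdlib-only sketch that sanity-checks those CIDRs and the advertise address the way a reviewer might:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Values lifted from the kubeadm config above.
	cidrs := map[string]string{
		"podSubnet / clusterCIDR":     "10.244.0.0/16",
		"serviceSubnet / ServiceCIDR": "10.96.0.0/12",
	}
	for name, cidr := range cidrs {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			fmt.Printf("%s: invalid CIDR %q: %v\n", name, cidr, err)
			continue
		}
		ones, bits := ipnet.Mask.Size()
		fmt.Printf("%s: %s (%d of %d mask bits)\n", name, ipnet, ones, bits)
	}
	// The advertise address (192.168.85.2 in this run) must sit outside
	// both cluster-internal ranges.
	addr := net.ParseIP("192.168.85.2")
	for name, cidr := range cidrs {
		_, ipnet, _ := net.ParseCIDR(cidr)
		fmt.Printf("%s contains %s: %v\n", name, addr, ipnet.Contains(addr))
	}
}
```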
	I1228 07:21:22.586370  239132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:21:22.594495  239132 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:21:22.594575  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:21:22.602495  239132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:21:22.616161  239132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:21:22.642590  239132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2261 bytes)
	I1228 07:21:22.657113  239132 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:21:22.660941  239132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:21:22.671475  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:22.797073  239132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:21:22.818370  239132 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028 for IP: 192.168.85.2
	I1228 07:21:22.818394  239132 certs.go:195] generating shared ca certs ...
	I1228 07:21:22.818409  239132 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:22.818595  239132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:21:22.818664  239132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:21:22.818678  239132 certs.go:257] generating profile certs ...
	I1228 07:21:22.818807  239132 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.key
	I1228 07:21:22.818917  239132 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.key.a35fc308
	I1228 07:21:22.819022  239132 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.key
	I1228 07:21:22.819162  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:21:22.819233  239132 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:21:22.819252  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:21:22.819297  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:21:22.819345  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:21:22.819376  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:21:22.819449  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:21:22.820082  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:21:22.840502  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:21:22.859610  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:21:22.877218  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:21:22.903202  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:21:22.924065  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:21:22.945243  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:21:22.979691  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:21:23.007886  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:21:23.031481  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:21:23.053849  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:21:23.074129  239132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:21:23.087658  239132 ssh_runner.go:195] Run: openssl version
	I1228 07:21:23.096294  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.104858  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:21:23.114048  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.118040  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.118182  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.164320  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:21:23.174421  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.183267  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:21:23.191805  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.196207  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.196276  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.238514  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:21:23.247772  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.255121  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:21:23.262956  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.267147  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.267214  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.308409  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:21:23.315795  239132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:21:23.319571  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:21:23.362320  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:21:23.404099  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:21:23.445152  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:21:23.487550  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:21:23.528965  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
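	Each of those openssl runs is `-checkend 86400`: does the certificate stay valid for at least another 24 hours? The same check in Go (a sketch; pass PEM certificate paths as arguments):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the first certificate in the PEM file is still
// valid for at least the given window — openssl x509 -checkend, in Go.
func checkend(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return fmt.Errorf("%s: no certificate PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
	}
	return nil
}

func main() {
	for _, path := range os.Args[1:] {
		if err := checkend(path, 86400*time.Second); err != nil {
			fmt.Println("FAIL:", err)
			continue
		}
		fmt.Println("OK:", path)
	}
}
```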
	I1228 07:21:23.581989  239132 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:21:23.582254  239132 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:21:23.593365  239132 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:21:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:21:23.593498  239132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:21:23.602959  239132 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:21:23.603030  239132 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:21:23.603101  239132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:21:23.615967  239132 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:21:23.616835  239132 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-450028" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:23.617419  239132 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-450028" cluster setting kubeconfig missing "default-k8s-diff-port-450028" context setting]
	I1228 07:21:23.618187  239132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.620123  239132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:21:23.627973  239132 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:21:23.628045  239132 kubeadm.go:602] duration metric: took 24.994987ms to restartPrimaryControlPlane
	I1228 07:21:23.628074  239132 kubeadm.go:403] duration metric: took 46.097204ms to StartCluster
	I1228 07:21:23.628112  239132 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.628183  239132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:23.629668  239132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.629935  239132 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:21:23.630459  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:23.630511  239132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:21:23.630689  239132 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.630730  239132 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.630745  239132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-450028"
	I1228 07:21:23.631079  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.630717  239132 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.631286  239132 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:21:23.631324  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.631747  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.631941  239132 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.634177  239132 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.634202  239132 addons.go:248] addon metrics-server should already be in state true
	I1228 07:21:23.634257  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.634705  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.631977  239132 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.638250  239132 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.638264  239132 addons.go:248] addon dashboard should already be in state true
	I1228 07:21:23.638307  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.638899  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.634047  239132 out.go:179] * Verifying Kubernetes components...
	I1228 07:21:23.648816  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:23.697883  239132 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.697903  239132 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:21:23.697927  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.698370  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.723458  239132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:21:23.723466  239132 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:21:23.723558  239132 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:21:23.727780  239132 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:21:23.730019  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:21:23.730042  239132 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:21:23.730116  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.731590  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:21:23.731618  239132 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:21:23.731681  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.738921  239132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:21:23.738946  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:21:23.739011  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.766524  239132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:21:23.766545  239132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:21:23.766605  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.794988  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.807720  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.834757  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.842215  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:24.008665  239132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:21:24.111765  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:21:24.111844  239132 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:21:24.133230  239132 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-450028" to be "Ready" ...
	I1228 07:21:24.181208  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:21:24.181226  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:21:24.193489  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:21:24.193509  239132 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:21:24.214149  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:21:24.230440  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:21:24.239488  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:21:24.239562  239132 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:21:24.240118  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:21:24.240170  239132 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:21:24.290157  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:21:24.290235  239132 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:21:24.293925  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:21:24.293997  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:21:24.317608  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:21:24.340985  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:21:24.341059  239132 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:21:24.480119  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:21:24.480210  239132 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:21:24.635171  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:21:24.635249  239132 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:21:24.760983  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:21:24.761075  239132 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:21:24.886775  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:21:24.886837  239132 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:21:24.912233  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:21:27.435693  239132 node_ready.go:49] node "default-k8s-diff-port-450028" is "Ready"
	I1228 07:21:27.435720  239132 node_ready.go:38] duration metric: took 3.302403514s for node "default-k8s-diff-port-450028" to be "Ready" ...
	I1228 07:21:27.435732  239132 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:21:27.435795  239132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
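	The wait above polls `pgrep -xnf kube-apiserver.*minikube.*` until the process exists. A local sketch of that poll loop with os/exec (illustrative timeout; pgrep exits 0 on a match):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline hits.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q before deadline", pattern)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}
```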
	I1228 07:21:27.842913  239132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.628730154s)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	0a7f436a37ce2       ba04bb24b9575       8 seconds ago        Running             storage-provisioner       2                   cb90502878e38       storage-provisioner                          kube-system
	4e29295da0b64       20b332c9a70d8       48 seconds ago       Running             kubernetes-dashboard      0                   7bfa615ce5447       kubernetes-dashboard-b84665fb8-lg5dt         kubernetes-dashboard
	5482ab6d9de61       e08f4d9d2e6ed       52 seconds ago       Running             coredns                   1                   e01643a161aca       coredns-7d764666f9-p9hf5                     kube-system
	74af3ffe86a81       1611cd07b61d5       53 seconds ago       Running             busybox                   1                   c1ee8c39f757c       busybox                                      default
	31a84ebc7c901       ba04bb24b9575       53 seconds ago       Exited              storage-provisioner       1                   cb90502878e38       storage-provisioner                          kube-system
	8d2288015d0bc       de369f46c2ff5       53 seconds ago       Running             kube-proxy                1                   ee63d0e36a56f       kube-proxy-r6p5h                             kube-system
	4d1dec8f2f55a       c96ee3c174987       53 seconds ago       Running             kindnet-cni               1                   649e9f40fbab0       kindnet-tvkjv                                kube-system
	83523225b03dd       271e49a0ebc56       About a minute ago   Running             etcd                      1                   8cfeea1024cbb       etcd-embed-certs-468470                      kube-system
	b3a18272c5dc5       88898f1d1a62a       About a minute ago   Running             kube-controller-manager   1                   30d966d38f202       kube-controller-manager-embed-certs-468470   kube-system
	c0659d22eae27       c3fcf259c473a       About a minute ago   Running             kube-apiserver            1                   c32b7e15f3fbb       kube-apiserver-embed-certs-468470            kube-system
	54f3cd29fa370       ddc8422d4d35a       About a minute ago   Running             kube-scheduler            1                   b078869f7145f       kube-scheduler-embed-certs-468470            kube-system
	fb6b73cd22904       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   93bc160146f80       busybox                                      default
	ffdd521b39349       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   fc790c415f558       coredns-7d764666f9-p9hf5                     kube-system
	06df275ad107c       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   b7552867da407       kindnet-tvkjv                                kube-system
	7aac017ef5789       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   63919bfe01f02       kube-proxy-r6p5h                             kube-system
	830828e98568c       c3fcf259c473a       2 minutes ago        Exited              kube-apiserver            0                   631e0b1626323       kube-apiserver-embed-certs-468470            kube-system
	f8a6e4f4f99b1       ddc8422d4d35a       2 minutes ago        Exited              kube-scheduler            0                   79f78abaf4a6d       kube-scheduler-embed-certs-468470            kube-system
	b1ea13590f635       271e49a0ebc56       2 minutes ago        Exited              etcd                      0                   4f94157e5b830       etcd-embed-certs-468470                      kube-system
	bceb11158ba9c       88898f1d1a62a       2 minutes ago        Exited              kube-controller-manager   0                   8622289c8fd4b       kube-controller-manager-embed-certs-468470   kube-system
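
The container status table is the CRI runtime's view of the node; the same listing can be reproduced from inside the node (a sketch using this run's profile name; -a includes the Exited rows shown above):

	# crictl queries containerd over the CRI socket inside the minikube node.
	out/minikube-linux-arm64 -p embed-certs-468470 ssh sudo crictl ps -a
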
	
	
	==> containerd <==
	Dec 28 07:21:25 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:25.608868888Z" level=info msg="StartContainer for \"0a7f436a37ce29649ddff246998a4a9ead9c1a7bf800e626973278a53a545616\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.515843555Z" level=info msg="StopPodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.516388413Z" level=info msg="TearDown network for sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.516436848Z" level=info msg="StopPodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567111020Z" level=info msg="RemovePodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567164838Z" level=info msg="Forcibly stopping sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567613597Z" level=info msg="TearDown network for sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.571189046Z" level=info msg="Ensure that sandbox 198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a in task-service has been cleanup successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.585186807Z" level=info msg="RemovePodSandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.592352392Z" level=info msg="StopPodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.638908062Z" level=info msg="TearDown network for sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.639092531Z" level=info msg="StopPodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.683754654Z" level=info msg="RemovePodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.683808948Z" level=info msg="Forcibly stopping sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.754492888Z" level=info msg="TearDown network for sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.769407917Z" level=info msg="Ensure that sandbox 2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0 in task-service has been cleanup successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.781104004Z" level=info msg="RemovePodSandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" returns successfully"
	Dec 28 07:21:32 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:32.845452504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.578998936Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.589986283Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.596289099Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.596737079Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.599323612Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.776545645Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.776689137Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
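
Two distinct failures are interleaved above: the fake.domain lookup error comes from the metrics-server addon's deliberately unreachable registry, while the schema-1 rejection is a genuine containerd 2.x behavior change, since registry.k8s.io/echoserver:1.4 is still served as a legacy Docker schema 1 manifest. The served media type can be confirmed from any host (a diagnostic sketch, assuming registry.k8s.io still permits anonymous manifest requests):

	# HEAD the manifest and ask for schema 2; the registry can only answer
	# with "application/vnd.docker.distribution.manifest.v1+prettyjws",
	# which containerd >= 2.1 refuses to pull.
	curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
	  https://registry.k8s.io/v2/echoserver/manifests/1.4 | grep -i '^content-type'
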
	
	
	==> describe nodes <==
	Name:               embed-certs-468470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-468470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-468470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_19_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:19:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-468470
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:21:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-468470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                5a0fe077-79c0-4df2-9225-886171cbf73f
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-p9hf5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-embed-certs-468470                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-tvkjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-468470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-468470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-r6p5h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-468470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 metrics-server-5d785b57d4-8pr72               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         83s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-66nxs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-lg5dt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node embed-certs-468470 event: Registered Node embed-certs-468470 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node embed-certs-468470 event: Registered Node embed-certs-468470 in Controller
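
The Ready condition in this node description is what minikube's node_ready wait polls for; it can also be read directly with a jsonpath query (a sketch against this run's context):

	# Prints "True" once the kubelet reports Ready, mirroring the
	# Conditions table above.
	kubectl --context embed-certs-468470 get node embed-certs-468470 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
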
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:21:34 up  1:04,  0 user,  load average: 4.77, 2.42, 1.99
	Linux embed-certs-468470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.288600    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51f383f6cb355a1b5cd6bb0ee063f3ae-kubeconfig\") pod \"kube-scheduler-embed-certs-468470\" (UID: \"51f383f6cb355a1b5cd6bb0ee063f3ae\") " pod="kube-system/kube-scheduler-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.288790    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23b27502-129e-42b1-b109-7cba9a746f06-lib-modules\") pod \"kube-proxy-r6p5h\" (UID: \"23b27502-129e-42b1-b109-7cba9a746f06\") " pod="kube-system/kube-proxy-r6p5h"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.289126    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-flexvolume-dir\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290047    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85a98c4ef752f5c7564288baaeedf141-k8s-certs\") pod \"kube-apiserver-embed-certs-468470\" (UID: \"85a98c4ef752f5c7564288baaeedf141\") " pod="kube-system/kube-apiserver-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290336    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-ca-certs\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290695    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-k8s-certs\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.291109    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85a98c4ef752f5c7564288baaeedf141-etc-ca-certificates\") pod \"kube-apiserver-embed-certs-468470\" (UID: \"85a98c4ef752f5c7564288baaeedf141\") " pod="kube-system/kube-apiserver-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.291524    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.597311    2408 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.597972    2408 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.598572    2408 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-8pr72_kube-system(804d1ee0-65ff-4c11-b80d-cf83a25cb95e): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.598801    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-8pr72" podUID="804d1ee0-65ff-4c11-b80d-cf83a25cb95e"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.776862    2408 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.776922    2408 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.777170    2408 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-66nxs_kubernetes-dashboard(6aeaf317-9f91-4a95-a416-da6a150160de): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.777228    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" podUID="6aeaf317-9f91-4a95-a416-da6a150160de"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.199533    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-468470" containerName="kube-controller-manager"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.200313    2408 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-8pr72" containerName="metrics-server"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.202010    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-468470" containerName="kube-scheduler"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.203630    2408 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" containerName="dashboard-metrics-scraper"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.209083    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-468470" containerName="etcd"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.215595    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-8pr72" podUID="804d1ee0-65ff-4c11-b80d-cf83a25cb95e"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.217117    2408 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p9hf5" containerName="coredns"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.257185    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" podUID="6aeaf317-9f91-4a95-a416-da6a150160de"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.261364    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-468470" containerName="kube-apiserver"
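
The ErrImagePull/ImagePullBackOff loop above is what leaves metrics-server and dashboard-metrics-scraper non-running in the post-mortem below; the per-container waiting reasons can be listed in one shot (a sketch, same context):

	# One line per pod; the second column shows any container's waiting
	# reason (ErrImagePull, ImagePullBackOff, ...) and is empty for
	# healthy pods.
	kubectl --context embed-certs-468470 get pods -A -o \
	  jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'
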
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-468470 -n embed-certs-468470
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-468470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs: exit status 1 (136.200205ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-8pr72" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-66nxs" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-468470
helpers_test.go:244: (dbg) docker inspect embed-certs-468470:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443",
	        "Created": "2025-12-28T07:19:20.700123673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:20:25.319933386Z",
	            "FinishedAt": "2025-12-28T07:20:24.194180083Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/hostname",
	        "HostsPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/hosts",
	        "LogPath": "/var/lib/docker/containers/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443/53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443-json.log",
	        "Name": "/embed-certs-468470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-468470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-468470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53693b36f335d71ee4b9995326e02a0990156969ba8e880d36744f2a88e6c443",
	                "LowerDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e40f8911d79c40898d17428324b2dd7f4c825c2eb8bf059115ca172682e57f27/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-468470",
	                "Source": "/var/lib/docker/volumes/embed-certs-468470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-468470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-468470",
	                "name.minikube.sigs.k8s.io": "embed-certs-468470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9739dc00421d5ea8a8ab9bd499eba190cc08a8f7668282249813b6ea0c193c2e",
	            "SandboxKey": "/var/run/docker/netns/9739dc00421d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-468470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b7:e6:d9:c8:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ea6e58d49dc7fb8843557882dde530e89cbe22a2196cb716224bb2dd25a58ac",
	                    "EndpointID": "d3e8b198756d41337960f37e12faf6398220650e22ce40df074734a633c87e0c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-468470",
	                        "53693b36f335"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
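
Individual fields can be pulled from the inspect output with a Go template instead of scanning the full JSON (a sketch; the template path mirrors the NetworkSettings.Ports block above):

	# HostPort that the container's 22/tcp (SSH) is published on;
	# prints 33085 for this run.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-468470
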
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-468470 -n embed-certs-468470
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-468470 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-468470 logs -n 25: (1.309304102s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                        │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
	│ stop    │ -p no-preload-863373 --alsologtostderr -v=3                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
	│ addons  │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                   │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ start   │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                  │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
	│ image   │ no-preload-863373 image list --format=json                                                                                                                                     │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ pause   │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ unpause │ -p no-preload-863373 --alsologtostderr -v=1                                                                                                                                    │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                           │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p no-preload-863373                                                                                                                                                           │ no-preload-863373            │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:20 UTC │
	│ ssh     │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml                                                                                                                  │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p force-systemd-flag-257442                                                                                                                                                   │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ delete  │ -p disable-driver-mounts-120791                                                                                                                                                │ disable-driver-mounts-120791 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                       │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ stop    │ -p embed-certs-468470 --alsologtostderr -v=3                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-468470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                  │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ stop    │ -p default-k8s-diff-port-450028 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │                     │
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                    │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                   │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:21:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:21:16.275242  239132 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:21:16.275445  239132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:21:16.275472  239132 out.go:374] Setting ErrFile to fd 2...
	I1228 07:21:16.275492  239132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:21:16.275771  239132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:21:16.276180  239132 out.go:368] Setting JSON to false
	I1228 07:21:16.278343  239132 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3826,"bootTime":1766902650,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:21:16.278465  239132 start.go:143] virtualization:  
	I1228 07:21:16.283614  239132 out.go:179] * [default-k8s-diff-port-450028] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:21:16.286732  239132 notify.go:221] Checking for updates...
	I1228 07:21:16.289989  239132 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:21:16.293346  239132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:21:16.296291  239132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:16.299171  239132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:21:16.302623  239132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:21:16.305593  239132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:21:16.309039  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:16.309585  239132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:21:16.343236  239132 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:21:16.343330  239132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:21:16.403472  239132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:21:16.393288911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:21:16.403574  239132 docker.go:319] overlay module found
	I1228 07:21:16.406661  239132 out.go:179] * Using the docker driver based on existing profile
	I1228 07:21:16.409506  239132 start.go:309] selected driver: docker
	I1228 07:21:16.409523  239132 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:21:16.409666  239132 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:21:16.410493  239132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:21:16.468163  239132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:21:16.459382207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:21:16.468613  239132 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 07:21:16.468645  239132 cni.go:84] Creating CNI manager for ""
	I1228 07:21:16.468706  239132 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:21:16.468747  239132 start.go:353] cluster config:
	{Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:21:16.473648  239132 out.go:179] * Starting "default-k8s-diff-port-450028" primary control-plane node in "default-k8s-diff-port-450028" cluster
	I1228 07:21:16.477939  239132 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:21:16.480933  239132 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:21:16.483827  239132 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:21:16.483875  239132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:21:16.483886  239132 cache.go:65] Caching tarball of preloaded images
	I1228 07:21:16.483909  239132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:21:16.483988  239132 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:21:16.483999  239132 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:21:16.484117  239132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/config.json ...
	I1228 07:21:16.503152  239132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:21:16.503176  239132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:21:16.503193  239132 cache.go:243] Successfully downloaded all kic artifacts
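
The three cache lines above show the pull being skipped: the digest-pinned kicbase image already exists in the local Docker daemon. A minimal sketch of the same check by hand (digest copied from this log; "docker image inspect" exits non-zero when the image is absent):

    # Check the local daemon for the digest-pinned kicbase image, as minikube does above.
    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1'
    if docker image inspect "$IMG" >/dev/null 2>&1; then
      echo "kicbase present in local daemon; pull and load can be skipped"
    fi
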
	I1228 07:21:16.503226  239132 start.go:360] acquireMachinesLock for default-k8s-diff-port-450028: {Name:mkeeedd54bbc599ba85ff6f61843f99b2783c4c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:21:16.503283  239132 start.go:364] duration metric: took 36.111µs to acquireMachinesLock for "default-k8s-diff-port-450028"
	I1228 07:21:16.503306  239132 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:21:16.503318  239132 fix.go:54] fixHost starting: 
	I1228 07:21:16.503577  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:16.521563  239132 fix.go:112] recreateIfNeeded on default-k8s-diff-port-450028: state=Stopped err=<nil>
	W1228 07:21:16.521595  239132 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:21:16.524786  239132 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-450028" ...
	I1228 07:21:16.524868  239132 cli_runner.go:164] Run: docker start default-k8s-diff-port-450028
	I1228 07:21:16.780017  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:16.808012  239132 kic.go:430] container "default-k8s-diff-port-450028" state is running.
	I1228 07:21:16.808409  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:16.831673  239132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/config.json ...
	I1228 07:21:16.831916  239132 machine.go:94] provisionDockerMachine start ...
	I1228 07:21:16.831973  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:16.853188  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:16.853504  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:16.853513  239132 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:21:16.854176  239132 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:21:19.988138  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-450028
	
	I1228 07:21:19.988166  239132 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-450028"
	I1228 07:21:19.988245  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.018603  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:20.018939  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:20.018960  239132 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-450028 && echo "default-k8s-diff-port-450028" | sudo tee /etc/hostname
	I1228 07:21:20.166339  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-450028
	
	I1228 07:21:20.166495  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.185372  239132 main.go:144] libmachine: Using SSH client type: native
	I1228 07:21:20.185698  239132 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1228 07:21:20.185721  239132 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-450028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-450028/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-450028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:21:20.320861  239132 main.go:144] libmachine: SSH cmd err, output: <nil>: 
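
All of the provisioning commands above run over SSH to the container's sshd, which Docker publishes on a host port (33090 in this run); libmachine resolves that port with the same inspect template the log shows. A sketch of the equivalent manual session, using the per-profile values from this log:

    # Resolve the published SSH port and connect the way libmachine does above.
    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-450028)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa \
      docker@127.0.0.1 hostname
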
	I1228 07:21:20.320897  239132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:21:20.320921  239132 ubuntu.go:190] setting up certificates
	I1228 07:21:20.320930  239132 provision.go:84] configureAuth start
	I1228 07:21:20.320994  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:20.340995  239132 provision.go:143] copyHostCerts
	I1228 07:21:20.341064  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:21:20.341086  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:21:20.341165  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:21:20.341269  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:21:20.341281  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:21:20.341309  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:21:20.341379  239132 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:21:20.341386  239132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:21:20.341414  239132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:21:20.341477  239132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-450028 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-450028 localhost minikube]
	I1228 07:21:20.618564  239132 provision.go:177] copyRemoteCerts
	I1228 07:21:20.618630  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:21:20.618680  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.637420  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:20.737295  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 07:21:20.755454  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 07:21:20.772924  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:21:20.790287  239132 provision.go:87] duration metric: took 469.332526ms to configureAuth
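
configureAuth regenerated the server certificate with the SAN set listed above (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube) and scp'd it to /etc/docker inside the node. A sketch, using standard openssl flags, of how to confirm the installed cert carries those SANs:

    # Inside the node: print the Subject Alternative Names of the provisioned server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'
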
	I1228 07:21:20.790336  239132 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:21:20.790527  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:20.790543  239132 machine.go:97] duration metric: took 3.958620266s to provisionDockerMachine
	I1228 07:21:20.790552  239132 start.go:293] postStartSetup for "default-k8s-diff-port-450028" (driver="docker")
	I1228 07:21:20.790561  239132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:21:20.790616  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:21:20.790667  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.809297  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:20.908780  239132 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:21:20.912575  239132 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:21:20.912604  239132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:21:20.912617  239132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:21:20.912673  239132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:21:20.912767  239132 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:21:20.912881  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:21:20.921205  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:21:20.939015  239132 start.go:296] duration metric: took 148.448297ms for postStartSetup
	I1228 07:21:20.939154  239132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:21:20.939199  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:20.958008  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.058104  239132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:21:21.062926  239132 fix.go:56] duration metric: took 4.559600901s for fixHost
	I1228 07:21:21.062954  239132 start.go:83] releasing machines lock for "default-k8s-diff-port-450028", held for 4.559658452s
	I1228 07:21:21.063024  239132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-450028
	I1228 07:21:21.080040  239132 ssh_runner.go:195] Run: cat /version.json
	I1228 07:21:21.080098  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:21.080400  239132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:21:21.080512  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:21.097997  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.101101  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:21.196081  239132 ssh_runner.go:195] Run: systemctl --version
	I1228 07:21:21.296900  239132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:21:21.302004  239132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:21:21.302074  239132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:21:21.310460  239132 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 07:21:21.310482  239132 start.go:496] detecting cgroup driver to use...
	I1228 07:21:21.310516  239132 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1228 07:21:21.310567  239132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1228 07:21:21.334255  239132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:21:21.348960  239132 docker.go:218] disabling cri-docker service (if available) ...
	I1228 07:21:21.349054  239132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 07:21:21.366205  239132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 07:21:21.379673  239132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 07:21:21.501457  239132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 07:21:21.620802  239132 docker.go:234] disabling docker service ...
	I1228 07:21:21.620924  239132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 07:21:21.635997  239132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 07:21:21.649445  239132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 07:21:21.766509  239132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 07:21:21.890775  239132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:21:21.903699  239132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:21:21.918987  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:21:21.929415  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:21:21.938715  239132 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1228 07:21:21.938811  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1228 07:21:21.948302  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:21:21.958495  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:21:21.967725  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:21:21.977019  239132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:21:21.985358  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:21:21.994850  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:21:22.005317  239132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:21:22.017132  239132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:21:22.026926  239132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:21:22.035532  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:22.159559  239132 ssh_runner.go:195] Run: sudo systemctl restart containerd
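
The sed batch above rewrites /etc/containerd/config.toml in place before the restart: the sandbox (pause) image is pinned, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, legacy runc v1 shims are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A quick spot check of the result, sketched against the keys those sed expressions touch:

    # Inside the node, after the restart: confirm the in-place edits landed.
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # Expected for this run: SystemdCgroup = false,
    # sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d"
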
	I1228 07:21:22.344544  239132 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1228 07:21:22.344656  239132 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1228 07:21:22.349412  239132 start.go:574] Will wait 60s for crictl version
	I1228 07:21:22.349526  239132 ssh_runner.go:195] Run: which crictl
	I1228 07:21:22.353443  239132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:21:22.378339  239132 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1228 07:21:22.378471  239132 ssh_runner.go:195] Run: containerd --version
	I1228 07:21:22.399175  239132 ssh_runner.go:195] Run: containerd --version
	I1228 07:21:22.428950  239132 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1228 07:21:22.432038  239132 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-450028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:21:22.449060  239132 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:21:22.452980  239132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:21:22.462956  239132 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:21:22.463078  239132 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:21:22.463145  239132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:21:22.523378  239132 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:21:22.523400  239132 containerd.go:542] Images already preloaded, skipping extraction
	I1228 07:21:22.523467  239132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 07:21:22.557130  239132 containerd.go:635] all images are preloaded for containerd runtime.
	I1228 07:21:22.557148  239132 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:21:22.557156  239132 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 containerd true true} ...
	I1228 07:21:22.557253  239132 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-450028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:21:22.557314  239132 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1228 07:21:22.586073  239132 cni.go:84] Creating CNI manager for ""
	I1228 07:21:22.586100  239132 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:21:22.586122  239132 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:21:22.586151  239132 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-450028 NodeName:default-k8s-diff-port-450028 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:21:22.586288  239132 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-450028"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
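This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the previous copy to decide whether the restarted control plane needs reconfiguration. As a sketch, recent kubeadm releases can also sanity-check the file directly (the "config validate" subcommand; availability depends on the kubeadm version):

    # Sketch: validate the rendered kubeadm config with the bundled binary.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
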
	I1228 07:21:22.586370  239132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:21:22.594495  239132 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:21:22.594575  239132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:21:22.602495  239132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1228 07:21:22.616161  239132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:21:22.642590  239132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2261 bytes)
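
The three scp calls above materialize the kubelet drop-in (10-kubeadm.conf), the unit file, and the kubeadm config from memory. A sketch of how to review what systemd will actually run, and why the unit text above starts with an empty ExecStart= line:

    # Inside the node: systemctl cat prints the unit merged with its drop-ins.
    systemctl cat kubelet
    # The bare "ExecStart=" clears any previously defined command so the
    # minikube ExecStart with --hostname-override and --node-ip replaces it.
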
	I1228 07:21:22.657113  239132 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:21:22.660941  239132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:21:22.671475  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:22.797073  239132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:21:22.818370  239132 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028 for IP: 192.168.85.2
	I1228 07:21:22.818394  239132 certs.go:195] generating shared ca certs ...
	I1228 07:21:22.818409  239132 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:22.818595  239132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
	I1228 07:21:22.818664  239132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
	I1228 07:21:22.818678  239132 certs.go:257] generating profile certs ...
	I1228 07:21:22.818807  239132 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.key
	I1228 07:21:22.818917  239132 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.key.a35fc308
	I1228 07:21:22.819022  239132 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.key
	I1228 07:21:22.819162  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
	W1228 07:21:22.819233  239132 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
	I1228 07:21:22.819252  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 07:21:22.819297  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:21:22.819345  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:21:22.819376  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
	I1228 07:21:22.819449  239132 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
	I1228 07:21:22.820082  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:21:22.840502  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 07:21:22.859610  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:21:22.877218  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:21:22.903202  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 07:21:22.924065  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:21:22.945243  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:21:22.979691  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 07:21:23.007886  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:21:23.031481  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
	I1228 07:21:23.053849  239132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
	I1228 07:21:23.074129  239132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:21:23.087658  239132 ssh_runner.go:195] Run: openssl version
	I1228 07:21:23.096294  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.104858  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
	I1228 07:21:23.114048  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.118040  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.118182  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
	I1228 07:21:23.164320  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:21:23.174421  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.183267  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
	I1228 07:21:23.191805  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.196207  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.196276  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
	I1228 07:21:23.238514  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:21:23.247772  239132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.255121  239132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:21:23.262956  239132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.267147  239132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.267214  239132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:21:23.308409  239132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
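
Each CA follows the same three-step installation visible above: copy into /usr/share/ca-certificates, compute the OpenSSL subject hash, and symlink it into /etc/ssl/certs as "<hash>.0" (b5213941 is the value the "openssl x509 -hash" call printed for minikubeCA.pem). Sketched end to end:

    # Derive the trust-store symlink name the same way the log does.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"
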
	I1228 07:21:23.315795  239132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:21:23.319571  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 07:21:23.362320  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 07:21:23.404099  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 07:21:23.445152  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 07:21:23.487550  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 07:21:23.528965  239132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
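
The six openssl probes above use "-checkend 86400", which exits 0 only if the certificate will still be valid 24 hours from now; because all of them pass, the existing control-plane certs are reused rather than regenerated. For example:

    # Exit status 0 => cert valid for at least another 86400 seconds (24h).
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "still valid, reuse" || echo "expiring, regenerate"
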
	I1228 07:21:23.581989  239132 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-450028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-450028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:21:23.582254  239132 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1228 07:21:23.593365  239132 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:21:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1228 07:21:23.593498  239132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:21:23.602959  239132 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 07:21:23.603030  239132 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 07:21:23.603101  239132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 07:21:23.615967  239132 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 07:21:23.616835  239132 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-450028" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:23.617419  239132 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-2380/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-450028" cluster setting kubeconfig missing "default-k8s-diff-port-450028" context setting]
	I1228 07:21:23.618187  239132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.620123  239132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 07:21:23.627973  239132 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 07:21:23.628045  239132 kubeadm.go:602] duration metric: took 24.994987ms to restartPrimaryControlPlane
	I1228 07:21:23.628074  239132 kubeadm.go:403] duration metric: took 46.097204ms to StartCluster
	I1228 07:21:23.628112  239132 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.628183  239132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:21:23.629668  239132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:21:23.629935  239132 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1228 07:21:23.630459  239132 config.go:182] Loaded profile config "default-k8s-diff-port-450028": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:21:23.630511  239132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 07:21:23.630689  239132 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.630730  239132 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.630745  239132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-450028"
	I1228 07:21:23.631079  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.630717  239132 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.631286  239132 addons.go:248] addon storage-provisioner should already be in state true
	I1228 07:21:23.631324  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.631747  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.631941  239132 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.634177  239132 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.634202  239132 addons.go:248] addon metrics-server should already be in state true
	I1228 07:21:23.634257  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.634705  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.631977  239132 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-450028"
	I1228 07:21:23.638250  239132 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.638264  239132 addons.go:248] addon dashboard should already be in state true
	I1228 07:21:23.638307  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.638899  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.634047  239132 out.go:179] * Verifying Kubernetes components...
	I1228 07:21:23.648816  239132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:21:23.697883  239132 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-450028"
	W1228 07:21:23.697903  239132 addons.go:248] addon default-storageclass should already be in state true
	I1228 07:21:23.697927  239132 host.go:66] Checking if "default-k8s-diff-port-450028" exists ...
	I1228 07:21:23.698370  239132 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-450028 --format={{.State.Status}}
	I1228 07:21:23.723458  239132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 07:21:23.723466  239132 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 07:21:23.723558  239132 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1228 07:21:23.727780  239132 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 07:21:23.730019  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 07:21:23.730042  239132 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 07:21:23.730116  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.731590  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 07:21:23.731618  239132 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 07:21:23.731681  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.738921  239132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:21:23.738946  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 07:21:23.739011  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.766524  239132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 07:21:23.766545  239132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 07:21:23.766605  239132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-450028
	I1228 07:21:23.794988  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.807720  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.834757  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:23.842215  239132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/default-k8s-diff-port-450028/id_rsa Username:docker}
	I1228 07:21:24.008665  239132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:21:24.111765  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 07:21:24.111844  239132 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 07:21:24.133230  239132 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-450028" to be "Ready" ...
	I1228 07:21:24.181208  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 07:21:24.181226  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1228 07:21:24.193489  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 07:21:24.193509  239132 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 07:21:24.214149  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 07:21:24.230440  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 07:21:24.239488  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 07:21:24.239562  239132 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 07:21:24.240118  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 07:21:24.240170  239132 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 07:21:24.290157  239132 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:21:24.290235  239132 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 07:21:24.293925  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 07:21:24.293997  239132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 07:21:24.317608  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 07:21:24.340985  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 07:21:24.341059  239132 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 07:21:24.480119  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 07:21:24.480210  239132 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 07:21:24.635171  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 07:21:24.635249  239132 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 07:21:24.760983  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 07:21:24.761075  239132 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 07:21:24.886775  239132 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:21:24.886837  239132 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 07:21:24.912233  239132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 07:21:27.435693  239132 node_ready.go:49] node "default-k8s-diff-port-450028" is "Ready"
	I1228 07:21:27.435720  239132 node_ready.go:38] duration metric: took 3.302403514s for node "default-k8s-diff-port-450028" to be "Ready" ...
	I1228 07:21:27.435732  239132 api_server.go:52] waiting for apiserver process to appear ...
	I1228 07:21:27.435795  239132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 07:21:27.842913  239132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.628730154s)
	I1228 07:21:33.407319  239132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.176805449s)
	I1228 07:21:33.407418  239132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.089736387s)
	I1228 07:21:33.407430  239132 addons.go:495] Verifying addon metrics-server=true in "default-k8s-diff-port-450028"
	I1228 07:21:33.407528  239132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.495218728s)
	I1228 07:21:33.407663  239132 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.971857485s)
	I1228 07:21:33.407677  239132 api_server.go:72] duration metric: took 9.777585112s to wait for apiserver process to appear ...
	I1228 07:21:33.407683  239132 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:21:33.407711  239132 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1228 07:21:33.410946  239132 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-450028 addons enable metrics-server
	
	I1228 07:21:33.415019  239132 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	0a7f436a37ce2       ba04bb24b9575       11 seconds ago       Running             storage-provisioner       2                   cb90502878e38       storage-provisioner                          kube-system
	4e29295da0b64       20b332c9a70d8       50 seconds ago       Running             kubernetes-dashboard      0                   7bfa615ce5447       kubernetes-dashboard-b84665fb8-lg5dt         kubernetes-dashboard
	5482ab6d9de61       e08f4d9d2e6ed       55 seconds ago       Running             coredns                   1                   e01643a161aca       coredns-7d764666f9-p9hf5                     kube-system
	74af3ffe86a81       1611cd07b61d5       55 seconds ago       Running             busybox                   1                   c1ee8c39f757c       busybox                                      default
	31a84ebc7c901       ba04bb24b9575       56 seconds ago       Exited              storage-provisioner       1                   cb90502878e38       storage-provisioner                          kube-system
	8d2288015d0bc       de369f46c2ff5       56 seconds ago       Running             kube-proxy                1                   ee63d0e36a56f       kube-proxy-r6p5h                             kube-system
	4d1dec8f2f55a       c96ee3c174987       56 seconds ago       Running             kindnet-cni               1                   649e9f40fbab0       kindnet-tvkjv                                kube-system
	83523225b03dd       271e49a0ebc56       About a minute ago   Running             etcd                      1                   8cfeea1024cbb       etcd-embed-certs-468470                      kube-system
	b3a18272c5dc5       88898f1d1a62a       About a minute ago   Running             kube-controller-manager   1                   30d966d38f202       kube-controller-manager-embed-certs-468470   kube-system
	c0659d22eae27       c3fcf259c473a       About a minute ago   Running             kube-apiserver            1                   c32b7e15f3fbb       kube-apiserver-embed-certs-468470            kube-system
	54f3cd29fa370       ddc8422d4d35a       About a minute ago   Running             kube-scheduler            1                   b078869f7145f       kube-scheduler-embed-certs-468470            kube-system
	fb6b73cd22904       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   93bc160146f80       busybox                                      default
	ffdd521b39349       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   fc790c415f558       coredns-7d764666f9-p9hf5                     kube-system
	06df275ad107c       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   b7552867da407       kindnet-tvkjv                                kube-system
	7aac017ef5789       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   63919bfe01f02       kube-proxy-r6p5h                             kube-system
	830828e98568c       c3fcf259c473a       2 minutes ago        Exited              kube-apiserver            0                   631e0b1626323       kube-apiserver-embed-certs-468470            kube-system
	f8a6e4f4f99b1       ddc8422d4d35a       2 minutes ago        Exited              kube-scheduler            0                   79f78abaf4a6d       kube-scheduler-embed-certs-468470            kube-system
	b1ea13590f635       271e49a0ebc56       2 minutes ago        Exited              etcd                      0                   4f94157e5b830       etcd-embed-certs-468470                      kube-system
	bceb11158ba9c       88898f1d1a62a       2 minutes ago        Exited              kube-controller-manager   0                   8622289c8fd4b       kube-controller-manager-embed-certs-468470   kube-system
	
	
	==> containerd <==
	Dec 28 07:21:25 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:25.608868888Z" level=info msg="StartContainer for \"0a7f436a37ce29649ddff246998a4a9ead9c1a7bf800e626973278a53a545616\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.515843555Z" level=info msg="StopPodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.516388413Z" level=info msg="TearDown network for sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.516436848Z" level=info msg="StopPodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567111020Z" level=info msg="RemovePodSandbox for \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567164838Z" level=info msg="Forcibly stopping sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.567613597Z" level=info msg="TearDown network for sandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.571189046Z" level=info msg="Ensure that sandbox 198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a in task-service has been cleanup successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.585186807Z" level=info msg="RemovePodSandbox \"198ef6a0bb05a19f2bb6b23a82edcecc5886b66f3219f81d1dcd49f91a7fcd7a\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.592352392Z" level=info msg="StopPodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.638908062Z" level=info msg="TearDown network for sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.639092531Z" level=info msg="StopPodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" returns successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.683754654Z" level=info msg="RemovePodSandbox for \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.683808948Z" level=info msg="Forcibly stopping sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\""
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.754492888Z" level=info msg="TearDown network for sandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.769407917Z" level=info msg="Ensure that sandbox 2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0 in task-service has been cleanup successfully"
	Dec 28 07:21:31 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:31.781104004Z" level=info msg="RemovePodSandbox \"2938800784483bdd9f6b9c3765ff4d02b3343263ac6945de58a325d9bd3adfa0\" returns successfully"
	Dec 28 07:21:32 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:32.845452504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.578998936Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.589986283Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.596289099Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.596737079Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.599323612Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.776545645Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:21:33 embed-certs-468470 containerd[555]: time="2025-12-28T07:21:33.776689137Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> describe nodes <==
	Name:               embed-certs-468470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-468470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-468470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_19_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:19:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-468470
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:21:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:21:32 +0000   Sun, 28 Dec 2025 07:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-468470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                5a0fe077-79c0-4df2-9225-886171cbf73f
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-p9hf5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-embed-certs-468470                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-tvkjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-embed-certs-468470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-468470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-r6p5h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-embed-certs-468470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 metrics-server-5d785b57d4-8pr72               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         86s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-66nxs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-lg5dt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node embed-certs-468470 event: Registered Node embed-certs-468470 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node embed-certs-468470 event: Registered Node embed-certs-468470 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:21:37 up  1:04,  0 user,  load average: 4.77, 2.42, 1.99
	Linux embed-certs-468470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290047    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85a98c4ef752f5c7564288baaeedf141-k8s-certs\") pod \"kube-apiserver-embed-certs-468470\" (UID: \"85a98c4ef752f5c7564288baaeedf141\") " pod="kube-system/kube-apiserver-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290336    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-ca-certs\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.290695    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-k8s-certs\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.291109    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85a98c4ef752f5c7564288baaeedf141-etc-ca-certificates\") pod \"kube-apiserver-embed-certs-468470\" (UID: \"85a98c4ef752f5c7564288baaeedf141\") " pod="kube-system/kube-apiserver-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: I1228 07:21:33.291524    2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f6af87c5eae12b92a6b1ac0c70ab3da-usr-share-ca-certificates\") pod \"kube-controller-manager-embed-certs-468470\" (UID: \"5f6af87c5eae12b92a6b1ac0c70ab3da\") " pod="kube-system/kube-controller-manager-embed-certs-468470"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.597311    2408 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.597972    2408 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.598572    2408 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-8pr72_kube-system(804d1ee0-65ff-4c11-b80d-cf83a25cb95e): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.598801    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-8pr72" podUID="804d1ee0-65ff-4c11-b80d-cf83a25cb95e"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.776862    2408 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.776922    2408 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.777170    2408 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-66nxs_kubernetes-dashboard(6aeaf317-9f91-4a95-a416-da6a150160de): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:21:33 embed-certs-468470 kubelet[2408]: E1228 07:21:33.777228    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" podUID="6aeaf317-9f91-4a95-a416-da6a150160de"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.199533    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-468470" containerName="kube-controller-manager"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.200313    2408 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-8pr72" containerName="metrics-server"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.202010    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-468470" containerName="kube-scheduler"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.203630    2408 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" containerName="dashboard-metrics-scraper"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.209083    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-468470" containerName="etcd"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.215595    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-8pr72" podUID="804d1ee0-65ff-4c11-b80d-cf83a25cb95e"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.217117    2408 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p9hf5" containerName="coredns"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.257185    2408 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-66nxs" podUID="6aeaf317-9f91-4a95-a416-da6a150160de"
	Dec 28 07:21:34 embed-certs-468470 kubelet[2408]: E1228 07:21:34.261364    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-468470" containerName="kube-apiserver"
	Dec 28 07:21:35 embed-certs-468470 kubelet[2408]: E1228 07:21:35.201731    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-468470" containerName="kube-scheduler"
	Dec 28 07:21:35 embed-certs-468470 kubelet[2408]: E1228 07:21:35.202113    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-468470" containerName="etcd"
	Dec 28 07:21:35 embed-certs-468470 kubelet[2408]: E1228 07:21:35.202416    2408 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-468470" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
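
The post-mortem logs above show two distinct image-pull failures. The pull of fake.domain/registry.k8s.io/echoserver:1.4 fails on DNS, which is expected: the test deliberately points the metrics-server addon at an unresolvable registry (--registries=MetricsServer=fake.domain; see the Audit table in the next failure's logs). The pull of registry.k8s.io/echoserver:1.4 fails for a different reason: containerd v2.1 and later reject Docker schema 1 manifests (application/vnd.docker.distribution.manifest.v1+prettyjws), so this image cannot be pulled on containerd 2.2.1 regardless of networking. A minimal Go sketch of reproducing the second failure directly against containerd, assuming the containerd 1.x client import paths, the default kicbase socket path, and the k8s.io namespace:

	package main

	import (
		"context"
		"fmt"

		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		// Assumed kicbase defaults: containerd listens on this socket and
		// kubelet-managed images live in the "k8s.io" namespace.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			panic(err)
		}
		defer client.Close()

		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

		// echoserver:1.4 is still published as a schema 1 manifest, which
		// containerd v2.1+ refuses, so this pull is expected to fail.
		_, err = client.Pull(ctx, "registry.k8s.io/echoserver:1.4", containerd.WithPullUnpack)
		fmt.Println(err) // expect: not implemented: media type ... no longer supported
	}

When run inside the node (for example via minikube ssh), the pull should surface the same "no longer supported since containerd v2.1" error that the kubelet reports above.
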
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-468470 -n embed-certs-468470
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-468470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs: exit status 1 (94.736387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-8pr72" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-66nxs" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-468470 describe pod metrics-server-5d785b57d4-8pr72 dashboard-metrics-scraper-867fb5f87b-66nxs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (9.77s)
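
Note on the exit status 1 above: the field-selector listing found metrics-server-5d785b57d4-8pr72 and dashboard-metrics-scraper-867fb5f87b-66nxs, but the follow-up describe by name returned NotFound, most likely because the Deployment-owned pods were replaced (or the profile torn down) between the two kubectl calls. A post-mortem helper that selects by label instead of by ephemeral pod name would avoid that race; a sketch, with the k8s-app label values being assumptions based on the usual addon manifests:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Describe the addon pods by label selector rather than by pod name,
		// so the lookup survives pod replacement. Label values are assumptions.
		targets := map[string]string{
			"kube-system":          "k8s-app=metrics-server",
			"kubernetes-dashboard": "k8s-app=dashboard-metrics-scraper",
		}
		for ns, label := range targets {
			out, err := exec.Command("kubectl", "--context", "embed-certs-468470",
				"-n", ns, "describe", "pod", "-l", label).CombinedOutput()
			fmt.Printf("namespace %s (err=%v):\n%s\n", ns, err, out)
		}
	}
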

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-450028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028: exit status 2 (419.031099ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028: exit status 2 (330.158765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-450028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
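
For reference, the failing check in this subtest is the one logged above: after minikube pause, status --format={{.APIServer}} must print "Paused", but here it still printed "Running" while the kubelet already reported "Stopped", so the pause only partially took effect within the check window. A standalone sketch of that assertion, paraphrased from start_stop_delete_test.go:309 with the binary path and profile name taken from the transcript:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "default-k8s-diff-port-450028"
		// `minikube status` exits non-zero when components are stopped or
		// paused; the test treats that as "may be ok", so err is ignored here too.
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
		if got := strings.TrimSpace(string(out)); got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want = %q\n", got, "Paused")
		}
	}
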
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-450028
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-450028:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95",
	        "Created": "2025-12-28T07:20:09.089074003Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:21:16.555270498Z",
	            "FinishedAt": "2025-12-28T07:21:15.673777987Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/hosts",
	        "LogPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95-json.log",
	        "Name": "/default-k8s-diff-port-450028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-450028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-450028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95",
	                "LowerDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-450028",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-450028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-450028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-450028",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-450028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d9835e49a19ff06c4a9e19fee5c64e3990e194686546d90993df4a569e21b0c",
	            "SandboxKey": "/var/run/docker/netns/7d9835e49a19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-450028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:29:9b:0a:53:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "366690d49bcd5644192e7ee6fd9c624b184936a364f7d4fe3fdeb5f549e299a8",
	                    "EndpointID": "3a4d9a15c3d667daf9986a392088681cd7031d33d454c51be95dac90e4c902ad",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-450028",
	                        "b4942d4dfc92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
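
The inspect output explains the port numbers used throughout this test: the PortBindings request leaves HostPort empty, so Docker assigns ephemeral 127.0.0.1 ports (33090-33094 in this run), and the sshutil clients earlier dialed the one mapped to 22/tcp. The test extracts it by shelling out to docker container inspect with a Go template; the same lookup through the Docker Go SDK looks as follows, as a sketch (the SDK import paths are standard, but using the SDK rather than the CLI is an illustration, not what the test does):

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Equivalent of the cli_runner template:
		// (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort
		insp, err := cli.ContainerInspect(context.Background(), "default-k8s-diff-port-450028")
		if err != nil {
			panic(err)
		}
		if bindings := insp.NetworkSettings.Ports[nat.Port("22/tcp")]; len(bindings) > 0 {
			fmt.Println("ssh host port:", bindings[0].HostPort) // "33090" in this run
		}
	}
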
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-450028 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-450028 logs -n 25: (1.252246229s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p force-systemd-flag-257442                                                                                                                                                                                                                        │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ delete  │ -p disable-driver-mounts-120791                                                                                                                                                                                                                     │ disable-driver-mounts-120791 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ stop    │ -p embed-certs-468470 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-468470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ stop    │ -p default-k8s-diff-port-450028 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                                                                                         │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ stop    │ -p newest-cni-205774 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ image   │ default-k8s-diff-port-450028 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
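The audit table above and the sections that follow are minikube's standard post-mortem bundle; a comparable dump can be regenerated by hand against the same profile (a sketch, assuming the profile still exists on the host):

    out/minikube-linux-arm64 logs -p default-k8s-diff-port-450028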
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:22:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:22:15.936981  246463 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:22:15.937092  246463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:15.937102  246463 out.go:374] Setting ErrFile to fd 2...
	I1228 07:22:15.937108  246463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:15.937370  246463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:22:15.937756  246463 out.go:368] Setting JSON to false
	I1228 07:22:15.938661  246463 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3886,"bootTime":1766902650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:22:15.938727  246463 start.go:143] virtualization:  
	I1228 07:22:15.943831  246463 out.go:179] * [newest-cni-205774] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:22:15.946893  246463 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:22:15.947000  246463 notify.go:221] Checking for updates...
	I1228 07:22:15.952813  246463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:22:15.955772  246463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:22:15.958698  246463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:22:15.961637  246463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:22:15.964567  246463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:22:15.967876  246463 config.go:182] Loaded profile config "newest-cni-205774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:22:15.968444  246463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:22:16.018487  246463 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:22:16.018642  246463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:16.122372  246463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:22:16.110930883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:16.122566  246463 docker.go:319] overlay module found
	I1228 07:22:16.126088  246463 out.go:179] * Using the docker driver based on existing profile
	I1228 07:22:16.128898  246463 start.go:309] selected driver: docker
	I1228 07:22:16.128921  246463 start.go:928] validating driver "docker" against &{Name:newest-cni-205774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-205774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:22:16.129019  246463 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:22:16.129716  246463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:16.243201  246463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:22:16.216665258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:16.243574  246463 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:22:16.243598  246463 cni.go:84] Creating CNI manager for ""
	I1228 07:22:16.243649  246463 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:22:16.243681  246463 start.go:353] cluster config:
	{Name:newest-cni-205774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-205774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:22:16.247202  246463 out.go:179] * Starting "newest-cni-205774" primary control-plane node in "newest-cni-205774" cluster
	I1228 07:22:16.250017  246463 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:22:16.252931  246463 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:22:16.255736  246463 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:22:16.255777  246463 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:22:16.255787  246463 cache.go:65] Caching tarball of preloaded images
	I1228 07:22:16.255863  246463 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:22:16.255871  246463 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:22:16.255996  246463 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/newest-cni-205774/config.json ...
	I1228 07:22:16.256187  246463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:22:16.297374  246463 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:22:16.297394  246463 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:22:16.297408  246463 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:22:16.297461  246463 start.go:360] acquireMachinesLock for newest-cni-205774: {Name:mkf76a1f024a6d87156b7715bd2baeb40097c8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:22:16.297534  246463 start.go:364] duration metric: took 46.753µs to acquireMachinesLock for "newest-cni-205774"
	I1228 07:22:16.297559  246463 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:22:16.297565  246463 fix.go:54] fixHost starting: 
	I1228 07:22:16.297850  246463 cli_runner.go:164] Run: docker container inspect newest-cni-205774 --format={{.State.Status}}
	I1228 07:22:16.330757  246463 fix.go:112] recreateIfNeeded on newest-cni-205774: state=Stopped err=<nil>
	W1228 07:22:16.330796  246463 fix.go:138] unexpected machine state, will restart: <nil>
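The restart decision above hinges on the container-state probe two lines earlier; the same check can be run by hand against the profile named in this log (a sketch):

    docker container inspect newest-cni-205774 --format '{{.State.Status}}'
    # a stopped kic node typically reports "exited"; minikube maps that to
    # Stopped, keeps the existing machine, and restarts it in place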
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f967767920996       ba04bb24b9575       7 seconds ago        Running             storage-provisioner       2                   6b8b7b2e83d51       storage-provisioner                                    kube-system
	d72bef542dadd       20b332c9a70d8       42 seconds ago       Running             kubernetes-dashboard      0                   b20e99d56572e       kubernetes-dashboard-b84665fb8-99ssq                   kubernetes-dashboard
	50496cf27ce65       e08f4d9d2e6ed       50 seconds ago       Running             coredns                   1                   768f3abccea79       coredns-7d764666f9-9wrzq                               kube-system
	95f4ff7482659       c96ee3c174987       50 seconds ago       Running             kindnet-cni               1                   3d2dfcd6cf977       kindnet-km6b9                                          kube-system
	7927b3551f47a       ba04bb24b9575       50 seconds ago       Exited              storage-provisioner       1                   6b8b7b2e83d51       storage-provisioner                                    kube-system
	17d414d3aaac6       1611cd07b61d5       50 seconds ago       Running             busybox                   1                   02d171546b8dc       busybox                                                default
	2560d17140679       de369f46c2ff5       51 seconds ago       Running             kube-proxy                1                   b78ecb06712f9       kube-proxy-dkff9                                       kube-system
	0a6f1659594aa       c3fcf259c473a       56 seconds ago       Running             kube-apiserver            1                   22ae2c47c3eaf       kube-apiserver-default-k8s-diff-port-450028            kube-system
	dc42852dd88bb       88898f1d1a62a       56 seconds ago       Running             kube-controller-manager   1                   fffa86ec930c9       kube-controller-manager-default-k8s-diff-port-450028   kube-system
	c30fd5f49fad1       ddc8422d4d35a       56 seconds ago       Running             kube-scheduler            1                   3ccf4604d111d       kube-scheduler-default-k8s-diff-port-450028            kube-system
	ddfb280ec0b02       271e49a0ebc56       56 seconds ago       Running             etcd                      1                   03bc8ae8bd156       etcd-default-k8s-diff-port-450028                      kube-system
	866ffaf6b0cbf       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   1b0f0c88f4e03       busybox                                                default
	2c502268cb3fb       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   9a870eb1ab6b4       coredns-7d764666f9-9wrzq                               kube-system
	bedbc83874fcf       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   9b3e5c6a45827       kindnet-km6b9                                          kube-system
	28fe8816496fc       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   1d0f956a01611       kube-proxy-dkff9                                       kube-system
	2a3d43e7cb0bf       ddc8422d4d35a       About a minute ago   Exited              kube-scheduler            0                   98394257bfa65       kube-scheduler-default-k8s-diff-port-450028            kube-system
	4e9b8dca74314       88898f1d1a62a       About a minute ago   Exited              kube-controller-manager   0                   6cd6f87066f63       kube-controller-manager-default-k8s-diff-port-450028   kube-system
	9bf7704213d3c       c3fcf259c473a       About a minute ago   Exited              kube-apiserver            0                   320f3083317ae       kube-apiserver-default-k8s-diff-port-450028            kube-system
	191fce1e96715       271e49a0ebc56       About a minute ago   Exited              etcd                      0                   b38fecc31a098       etcd-default-k8s-diff-port-450028                      kube-system
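A listing like the one above can be reproduced live inside the node, since the runtime is containerd and the node is reachable over minikube's ssh (a sketch; this assumes crictl in the kic image is already pointed at the containerd socket, as it is when minikube collects this section):

    out/minikube-linux-arm64 ssh -p default-k8s-diff-port-450028 -- sudo crictl ps -a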
	
	
	==> containerd <==
	Dec 28 07:22:16 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:16.221832417Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.725586247Z" level=info msg="StopPodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.757036470Z" level=info msg="TearDown network for sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.757314667Z" level=info msg="StopPodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.768187838Z" level=info msg="RemovePodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.768233697Z" level=info msg="Forcibly stopping sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.792430686Z" level=info msg="TearDown network for sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.797046850Z" level=info msg="Ensure that sandbox 93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059 in task-service has been cleanup successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.812673384Z" level=info msg="RemovePodSandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.815837301Z" level=info msg="StopPodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.816323911Z" level=info msg="TearDown network for sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.816368563Z" level=info msg="StopPodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.817899030Z" level=info msg="RemovePodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.817940811Z" level=info msg="Forcibly stopping sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.818332872Z" level=info msg="TearDown network for sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.821913621Z" level=info msg="Ensure that sandbox 91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab in task-service has been cleanup successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.830686622Z" level=info msg="RemovePodSandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" returns successfully"
	Dec 28 07:22:19 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:19.276722202Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:22:19 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:19.937395386Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.137856685Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.137906007Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.139305519Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.146543170Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.150436078Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.150612753Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
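The registry.k8s.io/echoserver:1.4 failures above are a containerd policy, not a network fault: the image is still published as a Docker schema 1 manifest (media type application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd refuses since v2.1. A minimal way to confirm the format and re-push a converted copy, assuming skopeo is available and my-registry.example.com is a placeholder for a registry you control:

    # the raw manifest of a legacy image shows "schemaVersion": 1
    skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | head -n 3
    # copy it out as a schema 2 manifest that containerd v2.x accepts
    skopeo copy --format v2s2 docker://registry.k8s.io/echoserver:1.4 docker://my-registry.example.com/echoserver:1.4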
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-450028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-450028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-450028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_20_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-450028
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:22:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-450028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                dc8dc859-46b2-4c19-b76d-f779e278cc18
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-9wrzq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-default-k8s-diff-port-450028                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-km6b9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-450028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-450028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-dkff9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-450028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 metrics-server-5d785b57d4-pcj9x                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-4d5lq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-99ssq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node default-k8s-diff-port-450028 event: Registered Node default-k8s-diff-port-450028 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node default-k8s-diff-port-450028 event: Registered Node default-k8s-diff-port-450028 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:22:21 up  1:04,  0 user,  load average: 5.05, 2.88, 2.17
	Linux default-k8s-diff-port-450028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.521874    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6ad0007f0a2d0b3bd9d20f4983c6984-usr-share-ca-certificates\") pod \"kube-apiserver-default-k8s-diff-port-450028\" (UID: \"b6ad0007f0a2d0b3bd9d20f4983c6984\") " pod="kube-system/kube-apiserver-default-k8s-diff-port-450028"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.521892    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d2bd9692cfe3ba5e28a5aa1f93434e1-etc-ca-certificates\") pod \"kube-controller-manager-default-k8s-diff-port-450028\" (UID: \"1d2bd9692cfe3ba5e28a5aa1f93434e1\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-450028"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.615651    2409 apiserver.go:52] "Watching apiserver"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.680271    2409 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725566    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/32b9f51c-9c2f-4dfe-8c46-da25b779d434-tmp\") pod \"storage-provisioner\" (UID: \"32b9f51c-9c2f-4dfe-8c46-da25b779d434\") " pod="kube-system/storage-provisioner"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725609    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbde4ab-e99b-4c87-b9aa-3417c2ddccfd-lib-modules\") pod \"kube-proxy-dkff9\" (UID: \"acbde4ab-e99b-4c87-b9aa-3417c2ddccfd\") " pod="kube-system/kube-proxy-dkff9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725665    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbde4ab-e99b-4c87-b9aa-3417c2ddccfd-xtables-lock\") pod \"kube-proxy-dkff9\" (UID: \"acbde4ab-e99b-4c87-b9aa-3417c2ddccfd\") " pod="kube-system/kube-proxy-dkff9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725733    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-cni-cfg\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725779    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-lib-modules\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725827    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-xtables-lock\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.892535    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-450028" containerName="kube-controller-manager"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.893027    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-450028" containerName="kube-apiserver"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.893742    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-450028" containerName="etcd"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.894010    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-450028" containerName="kube-scheduler"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138389    2409 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138534    2409 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138940    2409 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-4d5lq_kubernetes-dashboard(da33378b-7c6b-4300-a7e4-04be5c27cfd8): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.139585    2409 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4d5lq" podUID="da33378b-7c6b-4300-a7e4-04be5c27cfd8"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.150933    2409 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.152047    2409 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.156624    2409 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-pcj9x_kube-system(3bf5bb7b-42c6-4383-9c4a-84cb74cca85b): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.157534    2409 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-pcj9x" podUID="3bf5bb7b-42c6-4383-9c4a-84cb74cca85b"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895002    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-450028" containerName="kube-apiserver"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895806    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-450028" containerName="kube-scheduler"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895894    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-450028" containerName="etcd"
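Of the two pull failures in the kubelet log, both images were injected by the addon flags recorded in the audit table: the fake.domain lookup failure is the intended outcome of --registries=MetricsServer=fake.domain, while the echoserver rejection is a side effect of the containerd v2.1+ schema 1 policy noted above. Which image a pod actually resolves can be read back from the deployment spec (a sketch against this profile):

    kubectl --context default-k8s-diff-port-450028 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'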
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq: exit status 1 (110.758124ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-pcj9x" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-4d5lq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq: exit status 1
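The NotFound errors above are a namespace artifact rather than proof the pods are gone: the helper runs kubectl describe pod without -n, so only the default namespace is searched, while these pods live in kube-system and kubernetes-dashboard (see the container status and describe-nodes sections earlier). Namespace-qualified lookups would be (a sketch):

    kubectl --context default-k8s-diff-port-450028 -n kube-system describe pod metrics-server-5d785b57d4-pcj9x
    kubectl --context default-k8s-diff-port-450028 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-4d5lq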
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-450028
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-450028:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95",
	        "Created": "2025-12-28T07:20:09.089074003Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:21:16.555270498Z",
	            "FinishedAt": "2025-12-28T07:21:15.673777987Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/hosts",
	        "LogPath": "/var/lib/docker/containers/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95/b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95-json.log",
	        "Name": "/default-k8s-diff-port-450028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-450028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-450028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b4942d4dfc92e96fbc2aba859b4bf620d47d25d971d6e6f74fcb1a31f834fa95",
	                "LowerDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/952a0a1921ee6a9d9cc4acbd542f32f1ecb8bc58280fee5b71ce1d50171526c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-450028",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-450028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-450028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-450028",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-450028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d9835e49a19ff06c4a9e19fee5c64e3990e194686546d90993df4a569e21b0c",
	            "SandboxKey": "/var/run/docker/netns/7d9835e49a19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-450028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:29:9b:0a:53:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "366690d49bcd5644192e7ee6fd9c624b184936a364f7d4fe3fdeb5f549e299a8",
	                    "EndpointID": "3a4d9a15c3d667daf9986a392088681cd7031d33d454c51be95dac90e4c902ad",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-450028",
	                        "b4942d4dfc92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
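The inspect dump above is where the post-mortem reads back the container's state and host-side port mappings; note the HostConfig values line up with the test flags (Memory 3221225472 bytes is exactly the requested --memory=3072 MiB, and MemorySwap is twice that). As a minimal sketch, the 22/tcp lookup that minikube itself runs with a Go template later in this log can be reproduced by hand, assuming the profile container still exists:

    # Extract the host port Docker mapped to the node's SSH port (22/tcp).
    # Per the Ports block above, this prints 33090 for this profile.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-450028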
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-450028 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-450028 logs -n 25: (1.142383047s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
	│ delete  │ -p force-systemd-flag-257442                                                                                                                                                                                                                        │ force-systemd-flag-257442    │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ delete  │ -p disable-driver-mounts-120791                                                                                                                                                                                                                     │ disable-driver-mounts-120791 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ stop    │ -p embed-certs-468470 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-468470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:20 UTC │
	│ start   │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:20 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ stop    │ -p default-k8s-diff-port-450028 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                                                                                         │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470           │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ stop    │ -p newest-cni-205774 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774            │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ image   │ default-k8s-diff-port-450028 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
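	The Audit table above is rendered from minikube's audit log. A minimal sketch for inspecting the raw entries behind it, assuming they live at the usual logs/audit.json location under the MINIKUBE_HOME used by this run:
	
	    # Show the most recent raw audit entries backing the table above.
	    tail -n 5 /home/jenkins/minikube-integration/22352-2380/.minikube/logs/audit.json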
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:22:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:22:15.936981  246463 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:22:15.937092  246463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:15.937102  246463 out.go:374] Setting ErrFile to fd 2...
	I1228 07:22:15.937108  246463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:15.937370  246463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:22:15.937756  246463 out.go:368] Setting JSON to false
	I1228 07:22:15.938661  246463 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3886,"bootTime":1766902650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:22:15.938727  246463 start.go:143] virtualization:  
	I1228 07:22:15.943831  246463 out.go:179] * [newest-cni-205774] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:22:15.946893  246463 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:22:15.947000  246463 notify.go:221] Checking for updates...
	I1228 07:22:15.952813  246463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:22:15.955772  246463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:22:15.958698  246463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:22:15.961637  246463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:22:15.964567  246463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:22:15.967876  246463 config.go:182] Loaded profile config "newest-cni-205774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:22:15.968444  246463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:22:16.018487  246463 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:22:16.018642  246463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:16.122372  246463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:22:16.110930883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:16.122566  246463 docker.go:319] overlay module found
	I1228 07:22:16.126088  246463 out.go:179] * Using the docker driver based on existing profile
	I1228 07:22:16.128898  246463 start.go:309] selected driver: docker
	I1228 07:22:16.128921  246463 start.go:928] validating driver "docker" against &{Name:newest-cni-205774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-205774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:22:16.129019  246463 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:22:16.129716  246463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:16.243201  246463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:22:16.216665258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:16.243574  246463 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:22:16.243598  246463 cni.go:84] Creating CNI manager for ""
	I1228 07:22:16.243649  246463 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:22:16.243681  246463 start.go:353] cluster config:
	{Name:newest-cni-205774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-205774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:22:16.247202  246463 out.go:179] * Starting "newest-cni-205774" primary control-plane node in "newest-cni-205774" cluster
	I1228 07:22:16.250017  246463 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:22:16.252931  246463 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:22:16.255736  246463 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1228 07:22:16.255777  246463 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1228 07:22:16.255787  246463 cache.go:65] Caching tarball of preloaded images
	I1228 07:22:16.255863  246463 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:22:16.255871  246463 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1228 07:22:16.255996  246463 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/newest-cni-205774/config.json ...
	I1228 07:22:16.256187  246463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:22:16.297374  246463 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:22:16.297394  246463 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:22:16.297408  246463 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:22:16.297461  246463 start.go:360] acquireMachinesLock for newest-cni-205774: {Name:mkf76a1f024a6d87156b7715bd2baeb40097c8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:22:16.297534  246463 start.go:364] duration metric: took 46.753µs to acquireMachinesLock for "newest-cni-205774"
	I1228 07:22:16.297559  246463 start.go:96] Skipping create...Using existing machine configuration
	I1228 07:22:16.297565  246463 fix.go:54] fixHost starting: 
	I1228 07:22:16.297850  246463 cli_runner.go:164] Run: docker container inspect newest-cni-205774 --format={{.State.Status}}
	I1228 07:22:16.330757  246463 fix.go:112] recreateIfNeeded on newest-cni-205774: state=Stopped err=<nil>
	W1228 07:22:16.330796  246463 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 07:22:16.334594  246463 out.go:252] * Restarting existing docker container for "newest-cni-205774" ...
	I1228 07:22:16.334692  246463 cli_runner.go:164] Run: docker start newest-cni-205774
	I1228 07:22:16.685114  246463 cli_runner.go:164] Run: docker container inspect newest-cni-205774 --format={{.State.Status}}
	I1228 07:22:16.724296  246463 kic.go:430] container "newest-cni-205774" state is running.
	I1228 07:22:16.724776  246463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-205774
	I1228 07:22:16.751375  246463 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/newest-cni-205774/config.json ...
	I1228 07:22:16.751632  246463 machine.go:94] provisionDockerMachine start ...
	I1228 07:22:16.751705  246463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-205774
	I1228 07:22:16.776303  246463 main.go:144] libmachine: Using SSH client type: native
	I1228 07:22:16.776727  246463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1228 07:22:16.776746  246463 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:22:16.777356  246463 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39848->127.0.0.1:33100: read: connection reset by peer
	I1228 07:22:19.952080  246463 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-205774
	
	I1228 07:22:19.952108  246463 ubuntu.go:182] provisioning hostname "newest-cni-205774"
	I1228 07:22:19.952175  246463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-205774
	I1228 07:22:20.000034  246463 main.go:144] libmachine: Using SSH client type: native
	I1228 07:22:20.000364  246463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1228 07:22:20.000376  246463 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-205774 && echo "newest-cni-205774" | sudo tee /etc/hostname
	I1228 07:22:20.191794  246463 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-205774
	
	I1228 07:22:20.191909  246463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-205774
	I1228 07:22:20.218486  246463 main.go:144] libmachine: Using SSH client type: native
	I1228 07:22:20.218846  246463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1228 07:22:20.218878  246463 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-205774' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-205774/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-205774' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:22:20.376384  246463 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:22:20.376541  246463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
	I1228 07:22:20.376590  246463 ubuntu.go:190] setting up certificates
	I1228 07:22:20.376627  246463 provision.go:84] configureAuth start
	I1228 07:22:20.376716  246463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-205774
	I1228 07:22:20.406234  246463 provision.go:143] copyHostCerts
	I1228 07:22:20.406293  246463 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
	I1228 07:22:20.406309  246463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
	I1228 07:22:20.406387  246463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
	I1228 07:22:20.406493  246463 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
	I1228 07:22:20.406504  246463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
	I1228 07:22:20.406531  246463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
	I1228 07:22:20.406589  246463 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
	I1228 07:22:20.406594  246463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
	I1228 07:22:20.406617  246463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
	I1228 07:22:20.406669  246463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.newest-cni-205774 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-205774]
	I1228 07:22:20.594755  246463 provision.go:177] copyRemoteCerts
	I1228 07:22:20.594883  246463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:22:20.594977  246463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-205774
	I1228 07:22:20.615326  246463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/newest-cni-205774/id_rsa Username:docker}
	I1228 07:22:20.712906  246463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:22:20.734120  246463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 07:22:20.758140  246463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:22:20.787771  246463 provision.go:87] duration metric: took 411.109125ms to configureAuth
	I1228 07:22:20.787847  246463 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:22:20.788095  246463 config.go:182] Loaded profile config "newest-cni-205774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:22:20.788125  246463 machine.go:97] duration metric: took 4.036483834s to provisionDockerMachine
	I1228 07:22:20.788160  246463 start.go:293] postStartSetup for "newest-cni-205774" (driver="docker")
	I1228 07:22:20.788190  246463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:22:20.788267  246463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:22:20.788350  246463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-205774
	I1228 07:22:20.812703  246463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/newest-cni-205774/id_rsa Username:docker}
	I1228 07:22:20.913654  246463 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:22:20.917804  246463 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:22:20.917829  246463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:22:20.917840  246463 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
	I1228 07:22:20.917892  246463 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
	I1228 07:22:20.917969  246463 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
	I1228 07:22:20.918071  246463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:22:20.925664  246463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f967767920996       ba04bb24b9575       9 seconds ago        Running             storage-provisioner       2                   6b8b7b2e83d51       storage-provisioner                                    kube-system
	d72bef542dadd       20b332c9a70d8       44 seconds ago       Running             kubernetes-dashboard      0                   b20e99d56572e       kubernetes-dashboard-b84665fb8-99ssq                   kubernetes-dashboard
	50496cf27ce65       e08f4d9d2e6ed       52 seconds ago       Running             coredns                   1                   768f3abccea79       coredns-7d764666f9-9wrzq                               kube-system
	95f4ff7482659       c96ee3c174987       52 seconds ago       Running             kindnet-cni               1                   3d2dfcd6cf977       kindnet-km6b9                                          kube-system
	7927b3551f47a       ba04bb24b9575       53 seconds ago       Exited              storage-provisioner       1                   6b8b7b2e83d51       storage-provisioner                                    kube-system
	17d414d3aaac6       1611cd07b61d5       53 seconds ago       Running             busybox                   1                   02d171546b8dc       busybox                                                default
	2560d17140679       de369f46c2ff5       53 seconds ago       Running             kube-proxy                1                   b78ecb06712f9       kube-proxy-dkff9                                       kube-system
	0a6f1659594aa       c3fcf259c473a       58 seconds ago       Running             kube-apiserver            1                   22ae2c47c3eaf       kube-apiserver-default-k8s-diff-port-450028            kube-system
	dc42852dd88bb       88898f1d1a62a       58 seconds ago       Running             kube-controller-manager   1                   fffa86ec930c9       kube-controller-manager-default-k8s-diff-port-450028   kube-system
	c30fd5f49fad1       ddc8422d4d35a       58 seconds ago       Running             kube-scheduler            1                   3ccf4604d111d       kube-scheduler-default-k8s-diff-port-450028            kube-system
	ddfb280ec0b02       271e49a0ebc56       59 seconds ago       Running             etcd                      1                   03bc8ae8bd156       etcd-default-k8s-diff-port-450028                      kube-system
	866ffaf6b0cbf       1611cd07b61d5       About a minute ago   Exited              busybox                   0                   1b0f0c88f4e03       busybox                                                default
	2c502268cb3fb       e08f4d9d2e6ed       About a minute ago   Exited              coredns                   0                   9a870eb1ab6b4       coredns-7d764666f9-9wrzq                               kube-system
	bedbc83874fcf       c96ee3c174987       About a minute ago   Exited              kindnet-cni               0                   9b3e5c6a45827       kindnet-km6b9                                          kube-system
	28fe8816496fc       de369f46c2ff5       About a minute ago   Exited              kube-proxy                0                   1d0f956a01611       kube-proxy-dkff9                                       kube-system
	2a3d43e7cb0bf       ddc8422d4d35a       About a minute ago   Exited              kube-scheduler            0                   98394257bfa65       kube-scheduler-default-k8s-diff-port-450028            kube-system
	4e9b8dca74314       88898f1d1a62a       About a minute ago   Exited              kube-controller-manager   0                   6cd6f87066f63       kube-controller-manager-default-k8s-diff-port-450028   kube-system
	9bf7704213d3c       c3fcf259c473a       About a minute ago   Exited              kube-apiserver            0                   320f3083317ae       kube-apiserver-default-k8s-diff-port-450028            kube-system
	191fce1e96715       271e49a0ebc56       About a minute ago   Exited              etcd                      0                   b38fecc31a098       etcd-default-k8s-diff-port-450028                      kube-system
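	With containerd as the runtime, the container status table above is CRI-level state. A minimal sketch for re-capturing it from inside the node, assuming the profile container is still running and crictl is present in the kicbase image:
	
	    # Reproduce the container status table from inside the node via crictl.
	    out/minikube-linux-arm64 -p default-k8s-diff-port-450028 ssh -- sudo crictl ps -a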
	
	
	==> containerd <==
	Dec 28 07:22:16 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:16.221832417Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.725586247Z" level=info msg="StopPodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.757036470Z" level=info msg="TearDown network for sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.757314667Z" level=info msg="StopPodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.768187838Z" level=info msg="RemovePodSandbox for \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.768233697Z" level=info msg="Forcibly stopping sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.792430686Z" level=info msg="TearDown network for sandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.797046850Z" level=info msg="Ensure that sandbox 93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059 in task-service has been cleanup successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.812673384Z" level=info msg="RemovePodSandbox \"93aac038fc9ffae2a6ffab50ee485493d0b13d0995062d9915cdc32f3994c059\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.815837301Z" level=info msg="StopPodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.816323911Z" level=info msg="TearDown network for sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.816368563Z" level=info msg="StopPodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" returns successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.817899030Z" level=info msg="RemovePodSandbox for \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.817940811Z" level=info msg="Forcibly stopping sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\""
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.818332872Z" level=info msg="TearDown network for sandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.821913621Z" level=info msg="Ensure that sandbox 91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab in task-service has been cleanup successfully"
	Dec 28 07:22:18 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:18.830686622Z" level=info msg="RemovePodSandbox \"91d7d3f5551adcf8c5259304af1a878fce86f2abc6d70c4abc4a97cd70bcafab\" returns successfully"
	Dec 28 07:22:19 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:19.276722202Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 28 07:22:19 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:19.937395386Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.4\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.137856685Z" level=error msg="PullImage \"registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.137906007Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.139305519Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.146543170Z" level=info msg="fetch failed" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.150436078Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 28 07:22:20 default-k8s-diff-port-450028 containerd[555]: time="2025-12-28T07:22:20.150612753Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
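	The two pull failures above are distinct: fake.domain is the intentionally unresolvable registry set by the metrics-server addon test, while registry.k8s.io/echoserver:1.4 fails because containerd >= 2.1 rejects legacy Docker schema 1 manifests. A sketch for confirming the manifest schema from the client side, assuming crane (go-containerregistry) and jq are installed:
	
	    # A schema 1 image reports "schemaVersion": 1; containerd 2.x refuses its
	    # application/vnd.docker.distribution.manifest.v1+prettyjws media type.
	    crane manifest registry.k8s.io/echoserver:1.4 | jq '{schemaVersion, mediaType}'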
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-450028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-450028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-450028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_20_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-450028
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:22:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 07:22:19 +0000   Sun, 28 Dec 2025 07:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-450028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                dc8dc859-46b2-4c19-b76d-f779e278cc18
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-9wrzq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-default-k8s-diff-port-450028                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-km6b9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-450028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-450028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-dkff9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-450028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 metrics-server-5d785b57d4-pcj9x                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         80s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-4d5lq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-99ssq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node default-k8s-diff-port-450028 event: Registered Node default-k8s-diff-port-450028 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-450028 event: Registered Node default-k8s-diff-port-450028 in Controller
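	The node description above can be re-fetched against the same cluster using the kubeconfig this run writes (path taken from the start log); a kubeconfig context named after the profile is the usual minikube convention and is assumed here:
	
	    # Re-run the "describe nodes" capture shown above.
	    kubectl --kubeconfig /home/jenkins/minikube-integration/22352-2380/kubeconfig \
	      --context default-k8s-diff-port-450028 describe node default-k8s-diff-port-450028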
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:22:23 up  1:04,  0 user,  load average: 5.05, 2.88, 2.17
	Linux default-k8s-diff-port-450028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.521892    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d2bd9692cfe3ba5e28a5aa1f93434e1-etc-ca-certificates\") pod \"kube-controller-manager-default-k8s-diff-port-450028\" (UID: \"1d2bd9692cfe3ba5e28a5aa1f93434e1\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-450028"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.615651    2409 apiserver.go:52] "Watching apiserver"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.680271    2409 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725566    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/32b9f51c-9c2f-4dfe-8c46-da25b779d434-tmp\") pod \"storage-provisioner\" (UID: \"32b9f51c-9c2f-4dfe-8c46-da25b779d434\") " pod="kube-system/storage-provisioner"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725609    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbde4ab-e99b-4c87-b9aa-3417c2ddccfd-lib-modules\") pod \"kube-proxy-dkff9\" (UID: \"acbde4ab-e99b-4c87-b9aa-3417c2ddccfd\") " pod="kube-system/kube-proxy-dkff9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725665    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbde4ab-e99b-4c87-b9aa-3417c2ddccfd-xtables-lock\") pod \"kube-proxy-dkff9\" (UID: \"acbde4ab-e99b-4c87-b9aa-3417c2ddccfd\") " pod="kube-system/kube-proxy-dkff9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725733    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-cni-cfg\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725779    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-lib-modules\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: I1228 07:22:19.725827    2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c57fcd6-7bd3-41f0-9a10-3a0f98074b32-xtables-lock\") pod \"kindnet-km6b9\" (UID: \"1c57fcd6-7bd3-41f0-9a10-3a0f98074b32\") " pod="kube-system/kindnet-km6b9"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.892535    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-450028" containerName="kube-controller-manager"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.893027    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-450028" containerName="kube-apiserver"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.893742    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-450028" containerName="etcd"
	Dec 28 07:22:19 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:19.894010    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-450028" containerName="kube-scheduler"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138389    2409 log.go:32] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138534    2409 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" image="registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.138940    2409 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-4d5lq_kubernetes-dashboard(da33378b-7c6b-4300-a7e4-04be5c27cfd8): ErrImagePull: rpc error: code = Unimplemented desc = failed to pull and unpack image \"registry.k8s.io/echoserver:1.4\": not implemented: media type \"application/vnd.docker.distribution.manifest.v1+prettyjws\" is no longer supported since containerd v2.1, please rebuild the image as \"application/vnd.docker.distribution.manifest.v2+json\" or \"application/vnd.oci.image.manifest.v1+json\"" logger="UnhandledError"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.139585    2409 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unimplemented desc = failed to pull and unpack image \\\"registry.k8s.io/echoserver:1.4\\\": not implemented: media type \\\"application/vnd.docker.distribution.manifest.v1+prettyjws\\\" is no longer supported since containerd v2.1, please rebuild the image as \\\"application/vnd.docker.distribution.manifest.v2+json\\\" or \\\"application/vnd.oci.image.manifest.v1+json\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-4d5lq" podUID="da33378b-7c6b-4300-a7e4-04be5c27cfd8"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.150933    2409 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.152047    2409 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.156624    2409 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-pcj9x_kube-system(3bf5bb7b-42c6-4383-9c4a-84cb74cca85b): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.157534    2409 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-pcj9x" podUID="3bf5bb7b-42c6-4383-9c4a-84cb74cca85b"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895002    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-450028" containerName="kube-apiserver"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895806    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-450028" containerName="kube-scheduler"
	Dec 28 07:22:20 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:20.895894    2409 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-450028" containerName="etcd"
	Dec 28 07:22:21 default-k8s-diff-port-450028 kubelet[2409]: E1228 07:22:21.353034    2409 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9wrzq" containerName="coredns"
	

                                                
                                                
-- /stdout --
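Two distinct image-pull failures appear in the kubelet log above. The registry.k8s.io/echoserver:1.4 pull fails because that image is still published as a Docker manifest schema 1 (application/vnd.docker.distribution.manifest.v1+prettyjws), which containerd dropped in v2.1; this node runs containerd 2.2.1, so the dashboard-metrics-scraper pod can never start. The fake.domain pull failure is expected: the test deliberately rewires the metrics-server registry to fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entry in the Audit table further below), so the DNS lookup is meant to fail. Together these account for the two non-running pods flagged just below. As a side note, the 47% CPU-request figure in the describe output is 950m against the node's 2-CPU capacity (950/2000 = 47.5%, rounded down). A hedged way to confirm the failing image's manifest schema, assuming a docker CLI with the manifest subcommand (not part of this test run):

	# Inspect the published manifest; schema 1 images report "schemaVersion": 1
	docker manifest inspect registry.k8s.io/echoserver:1.4 | grep -i schemaVersion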
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq: exit status 1 (149.492036ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-pcj9x" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-4d5lq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-450028 describe pod metrics-server-5d785b57d4-pcj9x dashboard-metrics-scraper-867fb5f87b-4d5lq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-205774 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-205774 -n newest-cni-205774
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-205774 -n newest-cni-205774: exit status 2 (444.405335ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Running"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-205774 -n newest-cni-205774
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-205774 -n newest-cni-205774: exit status 2 (453.115139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-205774 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-205774 -n newest-cni-205774
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-205774 -n newest-cni-205774
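The failure pattern here: after `minikube pause`, the kubelet correctly reports Stopped, but the apiserver status still reads Running where the test expects Paused. minikube's pause freezes the running containers through the container runtime, so a Running apiserver immediately after pause suggests the freeze either did not take effect or had not completed when status was sampled. A hypothetical spot-check using the profile name from this run (the crictl flags are standard; the ssh passthrough form is an assumption):

	# List containers the runtime still reports as Running right after pause
	out/minikube-linux-arm64 ssh -p newest-cni-205774 -- sudo crictl ps --state Running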
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-205774
helpers_test.go:244: (dbg) docker inspect newest-cni-205774:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f",
	        "Created": "2025-12-28T07:21:46.788269683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:22:16.368529836Z",
	            "FinishedAt": "2025-12-28T07:22:15.202443176Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/hosts",
	        "LogPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f-json.log",
	        "Name": "/newest-cni-205774",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-205774:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-205774",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f",
	                "LowerDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-205774",
	                "Source": "/var/lib/docker/volumes/newest-cni-205774/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-205774",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-205774",
	                "name.minikube.sigs.k8s.io": "newest-cni-205774",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "921bfd8535a597c4e81c2a5a5dd6483553d5dd3ea5783701b22ccff1b139e862",
	            "SandboxKey": "/var/run/docker/netns/921bfd8535a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-205774": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:2f:f0:48:e2:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5461180d41c8930319db01bd3fd5c7f93122e8a433f052e8fbf9abaf649c9cf",
	                    "EndpointID": "d812ffd497758d8cf735b8ffd01bece7f2574fb1b1aa8a8e521877bc906a9cd4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-205774",
	                        "0cfd8f8a662c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
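The inspect dump confirms the node container itself is healthy: State shows Running:true and Paused:false, HostConfig.Memory is 3221225472 bytes (3072 MiB, matching the --memory=3072 start flag), and the Kubernetes API port 8443 is published only on 127.0.0.1 at an ephemeral host port (33103 here). When only a few fields matter, docker's Go-template formatting gives a much narrower query than the full dump; a sketch against the same profile name:

	# Pull just the status, pause flag, and memory limit from the inspect data
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} mem={{.HostConfig.Memory}}' newest-cni-205774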
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-205774 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-205774 logs -n 25: (1.203241399s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ stop    │ -p default-k8s-diff-port-450028 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                                                                                         │ embed-certs-468470            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470            │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ stop    │ -p newest-cni-205774 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ image   │ default-k8s-diff-port-450028 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ delete  │ -p default-k8s-diff-port-450028                                                                                                                                                                                                                     │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ delete  │ -p default-k8s-diff-port-450028                                                                                                                                                                                                                     │ default-k8s-diff-port-450028  │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p test-preload-dl-gcs-209495 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-209495    │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-209495                                                                                                                                                                                                                       │ test-preload-dl-gcs-209495    │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p test-preload-dl-github-675713 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-675713 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ image   │ newest-cni-205774 image list --format=json                                                                                                                                                                                                          │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p newest-cni-205774 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p newest-cni-205774 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-205774             │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
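The Audit table is minikube's persisted command history, replayed at the top of `minikube logs` output; it shows the full stop/start/pause/unpause sequence this test drove, plus the unrelated preload-download profiles running in parallel on the same host. Assuming minikube's default layout, the raw history lives in a JSON log under the minikube home directory:

	# Hedged: path assumes the MINIKUBE_HOME of this CI run and minikube's
	# default logs/audit.json location
	tail -n 3 /home/jenkins/minikube-integration/22352-2380/.minikube/logs/audit.json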
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:22:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:22:33.614050  250107 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:22:33.614214  250107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:33.614236  250107 out.go:374] Setting ErrFile to fd 2...
	I1228 07:22:33.614255  250107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:33.614539  250107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:22:33.614981  250107 out.go:368] Setting JSON to false
	I1228 07:22:33.615856  250107 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3904,"bootTime":1766902650,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:22:33.615947  250107 start.go:143] virtualization:  
	I1228 07:22:33.619400  250107 out.go:179] * [test-preload-dl-github-675713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:22:33.623209  250107 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:22:33.623289  250107 notify.go:221] Checking for updates...
	I1228 07:22:33.630037  250107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:22:33.632976  250107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:22:33.636000  250107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:22:33.639216  250107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:22:33.642109  250107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:22:33.646506  250107 config.go:182] Loaded profile config "newest-cni-205774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:22:33.646656  250107 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:22:33.687966  250107 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:22:33.688091  250107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:33.805397  250107 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 07:22:33.791425205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:33.805501  250107 docker.go:319] overlay module found
	I1228 07:22:33.808477  250107 out.go:179] * Using the docker driver based on user configuration
	I1228 07:22:33.811239  250107 start.go:309] selected driver: docker
	I1228 07:22:33.811260  250107 start.go:928] validating driver "docker" against <nil>
	I1228 07:22:33.811369  250107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:22:33.928383  250107 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 07:22:33.917360996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:22:33.928632  250107 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:22:33.928883  250107 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1228 07:22:33.929034  250107 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:22:33.932201  250107 out.go:179] * Using Docker driver with root privileges
	I1228 07:22:33.935166  250107 cni.go:84] Creating CNI manager for ""
	I1228 07:22:33.935238  250107 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 07:22:33.935247  250107 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 07:22:33.935322  250107 start.go:353] cluster config:
	{Name:test-preload-dl-github-675713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-675713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:22:33.940381  250107 out.go:179] * Starting "test-preload-dl-github-675713" primary control-plane node in "test-preload-dl-github-675713" cluster
	I1228 07:22:33.944571  250107 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 07:22:33.948221  250107 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:22:34.170745  246463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.535417159s)
	I1228 07:22:34.170800  246463 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.460975574s)
	I1228 07:22:34.170812  246463 api_server.go:72] duration metric: took 9.564520197s to wait for apiserver process to appear ...
	I1228 07:22:34.170818  246463 api_server.go:88] waiting for apiserver healthz status ...
	I1228 07:22:34.170834  246463 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 07:22:34.171132  246463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.309426936s)
	I1228 07:22:34.195883  246463 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 07:22:34.197347  246463 api_server.go:141] control plane version: v1.35.0
	I1228 07:22:34.197414  246463 api_server.go:131] duration metric: took 26.589733ms to wait for apiserver health ...
	I1228 07:22:34.197439  246463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 07:22:34.206149  246463 system_pods.go:59] 9 kube-system pods found
	I1228 07:22:34.206239  246463 system_pods.go:61] "coredns-7d764666f9-b4wgx" [64e03e8d-72e6-479e-8df8-b144a4ea3a7e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:22:34.206303  246463 system_pods.go:61] "etcd-newest-cni-205774" [ab34498a-7e51-44ff-bfa6-38c2ab436332] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 07:22:34.206334  246463 system_pods.go:61] "kindnet-25mjx" [f3695d38-8b23-47ff-ad74-34792577851c] Running
	I1228 07:22:34.206358  246463 system_pods.go:61] "kube-apiserver-newest-cni-205774" [4ebeb32c-6604-4c67-8203-a6b9df17b337] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 07:22:34.206392  246463 system_pods.go:61] "kube-controller-manager-newest-cni-205774" [07653101-9566-4444-ac13-19ee68756d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 07:22:34.206414  246463 system_pods.go:61] "kube-proxy-7rtd9" [2e23f9e7-893d-4f50-adb4-3b60d08713e2] Running
	I1228 07:22:34.206438  246463 system_pods.go:61] "kube-scheduler-newest-cni-205774" [b7635d59-c91f-44d5-aadb-ccd8f2948434] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 07:22:34.206472  246463 system_pods.go:61] "metrics-server-5d785b57d4-bhpwq" [7d4889b7-6f47-413b-b782-6a62a2a94f3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:22:34.206495  246463 system_pods.go:61] "storage-provisioner" [bf7a263d-a021-423d-b723-60aa2f75e0ae] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 07:22:34.206517  246463 system_pods.go:74] duration metric: took 9.058558ms to wait for pod list to return data ...
	I1228 07:22:34.206549  246463 default_sa.go:34] waiting for default service account to be created ...
	I1228 07:22:34.209611  246463 default_sa.go:45] found service account: "default"
	I1228 07:22:34.209683  246463 default_sa.go:55] duration metric: took 3.114711ms for default service account to be created ...
	I1228 07:22:34.209711  246463 kubeadm.go:587] duration metric: took 9.603416951s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 07:22:34.209752  246463 node_conditions.go:102] verifying NodePressure condition ...
	I1228 07:22:34.212811  246463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1228 07:22:34.212890  246463 node_conditions.go:123] node cpu capacity is 2
	I1228 07:22:34.212920  246463 node_conditions.go:105] duration metric: took 3.143577ms to run NodePressure ...
	I1228 07:22:34.212945  246463 start.go:242] waiting for startup goroutines ...
	I1228 07:22:34.234578  246463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.656433272s)
	I1228 07:22:34.234612  246463 addons.go:495] Verifying addon metrics-server=true in "newest-cni-205774"
	I1228 07:22:34.234712  246463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.288338157s)
	I1228 07:22:34.237806  246463 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-205774 addons enable metrics-server
	
	I1228 07:22:34.240667  246463 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1228 07:22:34.243517  246463 addons.go:530] duration metric: took 9.636822718s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1228 07:22:34.243555  246463 start.go:247] waiting for cluster config update ...
	I1228 07:22:34.243582  246463 start.go:256] writing updated cluster config ...
	I1228 07:22:34.243871  246463 ssh_runner.go:195] Run: rm -f paused
	I1228 07:22:34.308450  246463 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1228 07:22:34.311555  246463 out.go:203] 
	W1228 07:22:34.314400  246463 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1228 07:22:34.317218  246463 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1228 07:22:34.320161  246463 out.go:179] * Done! kubectl is now configured to use "newest-cni-205774" cluster and "default" namespace by default
	I1228 07:22:33.951054  250107 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:22:33.951320  250107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:22:33.973231  250107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:22:33.973255  250107 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 07:22:33.973330  250107 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 07:22:33.973353  250107 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 07:22:33.973362  250107 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 07:22:33.973369  250107 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 07:22:34.480449  250107 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I1228 07:22:34.480495  250107 cache.go:65] Caching tarball of preloaded images
	I1228 07:22:34.480636  250107 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime containerd
	I1228 07:22:34.483636  250107 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
	I1228 07:22:34.486806  250107 preload.go:269] Downloading preload from https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I1228 07:22:34.486828  250107 preload.go:347] getting checksum for preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4 from github api...
	I1228 07:22:34.965810  250107 preload.go:316] Got checksum from Github API "e849eacb4e90857309d4ab9943956af2b68220f2398135629510b1694c37dd71"
	I1228 07:22:34.965917  250107 download.go:114] Downloading: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=sha256:e849eacb4e90857309d4ab9943956af2b68220f2398135629510b1694c37dd71 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4
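	The preload fetch above pins the artifact to the SHA-256 digest obtained from the GitHub API, so a corrupted or truncated download fails before extraction. For reference, the same integrity check can be rerun by hand against the cached tarball (path and digest exactly as logged above):
	
		sha256sum /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-containerd-overlay2-arm64.tar.lz4
		# expected: e849eacb4e90857309d4ab9943956af2b68220f2398135629510b1694c37dd71
	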
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	27de065ebcaaa       c96ee3c174987       7 seconds ago       Running             kindnet-cni               1                   bcab27995f718       kindnet-25mjx                               kube-system
	aca3e1c12ea3f       de369f46c2ff5       7 seconds ago       Running             kube-proxy                1                   f348cc812939d       kube-proxy-7rtd9                            kube-system
	13e953a5db4e5       88898f1d1a62a       14 seconds ago      Running             kube-controller-manager   1                   9ad52d814ea6f       kube-controller-manager-newest-cni-205774   kube-system
	b03fce029c009       c3fcf259c473a       14 seconds ago      Running             kube-apiserver            1                   9921fbb661775       kube-apiserver-newest-cni-205774            kube-system
	38219e8929254       ddc8422d4d35a       14 seconds ago      Running             kube-scheduler            1                   f49469005aa67       kube-scheduler-newest-cni-205774            kube-system
	06f65bcae840d       271e49a0ebc56       14 seconds ago      Running             etcd                      1                   2b7d7917a2077       etcd-newest-cni-205774                      kube-system
	edaf51bc6e624       c96ee3c174987       25 seconds ago      Exited              kindnet-cni               0                   8fa961094d5de       kindnet-25mjx                               kube-system
	ef1607c6ac644       de369f46c2ff5       27 seconds ago      Exited              kube-proxy                0                   86e11d943bb50       kube-proxy-7rtd9                            kube-system
	4f42e043e48a9       88898f1d1a62a       39 seconds ago      Exited              kube-controller-manager   0                   be30139d61b80       kube-controller-manager-newest-cni-205774   kube-system
	7ce4a777ed519       ddc8422d4d35a       39 seconds ago      Exited              kube-scheduler            0                   cbe3f02a12a8c       kube-scheduler-newest-cni-205774            kube-system
	2d301be485e6e       271e49a0ebc56       39 seconds ago      Exited              etcd                      0                   9ca6adff980f6       etcd-newest-cni-205774                      kube-system
	8335f82f69329       c3fcf259c473a       39 seconds ago      Exited              kube-apiserver            0                   7b88ccbdbee86       kube-apiserver-newest-cni-205774            kube-system
	
	
	==> containerd <==
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.946056861Z" level=info msg="TearDown network for sandbox \"86e11d943bb50bb15cd1df1e8c4bd41f457c66fe83cf30fd66e9e8a16f68a025\" successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.946107503Z" level=info msg="StopPodSandbox for \"86e11d943bb50bb15cd1df1e8c4bd41f457c66fe83cf30fd66e9e8a16f68a025\" returns successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.952312637Z" level=info msg="StopPodSandbox for \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\""
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.952399103Z" level=info msg="Container to stop \"edaf51bc6e62468e9e5025f90cfebdb8b0e6ebc8b4c61655b8e1ff531c874edb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.953148263Z" level=info msg="TearDown network for sandbox \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\" successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.953209794Z" level=info msg="StopPodSandbox for \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\" returns successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.961153757Z" level=info msg="RunPodSandbox for name:\"kube-proxy-7rtd9\" uid:\"2e23f9e7-893d-4f50-adb4-3b60d08713e2\" namespace:\"kube-system\" attempt:1"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.967106564Z" level=info msg="RunPodSandbox for name:\"kindnet-25mjx\" uid:\"f3695d38-8b23-47ff-ad74-34792577851c\" namespace:\"kube-system\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.192734518Z" level=info msg="connecting to shim bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9" address="unix:///run/containerd/s/7d6612d44969d8bbd842b645bb7e695955fa246e6c4d6fa8a50b6f7eb3b7e934" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.200135133Z" level=info msg="connecting to shim f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16" address="unix:///run/containerd/s/0828850bb353edb798a55105691a37cdf2d5dc1e4222313daee04007003412e9" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.573393305Z" level=info msg="RunPodSandbox for name:\"kube-proxy-7rtd9\" uid:\"2e23f9e7-893d-4f50-adb4-3b60d08713e2\" namespace:\"kube-system\" attempt:1 returns sandbox id \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.577546589Z" level=info msg="CreateContainer within sandbox \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\" for container name:\"kube-proxy\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.598291756Z" level=info msg="Container aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.613696282Z" level=info msg="CreateContainer within sandbox \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\" for name:\"kube-proxy\" attempt:1 returns container id \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.614437524Z" level=info msg="StartContainer for \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.617175238Z" level=info msg="connecting to shim aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6" address="unix:///run/containerd/s/0828850bb353edb798a55105691a37cdf2d5dc1e4222313daee04007003412e9" protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.749444894Z" level=info msg="RunPodSandbox for name:\"kindnet-25mjx\" uid:\"f3695d38-8b23-47ff-ad74-34792577851c\" namespace:\"kube-system\" attempt:1 returns sandbox id \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.755897841Z" level=info msg="CreateContainer within sandbox \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\" for container name:\"kindnet-cni\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.793795177Z" level=info msg="Container 27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.822984071Z" level=info msg="CreateContainer within sandbox \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\" for name:\"kindnet-cni\" attempt:1 returns container id \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.823775339Z" level=info msg="StartContainer for \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.824881138Z" level=info msg="connecting to shim 27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1" address="unix:///run/containerd/s/7d6612d44969d8bbd842b645bb7e695955fa246e6c4d6fa8a50b6f7eb3b7e934" protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.960716200Z" level=info msg="StartContainer for \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\" returns successfully"
	Dec 28 07:22:32 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:32.556748121Z" level=info msg="StartContainer for \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\" returns successfully"
	Dec 28 07:22:37 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:37.656833362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
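	The final containerd line above explains the NotReady condition reported in the node description below: kubelet keeps the container-runtime network NotReady until a CNI config appears in /etc/cni/net.d, which the kindnet pod writes once it is running. One way to check whether that config has landed, assuming the profile name from this run, is:
	
		minikube -p newest-cni-205774 ssh -- sudo ls -la /etc/cni/net.d
	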
	
	
	==> describe nodes <==
	Name:               newest-cni-205774
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-205774
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-205774
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:22:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-205774
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-205774
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                c78b7a9c-baa5-412b-b391-8a01ce2381ac
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-205774                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-25mjx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-205774             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-205774    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-7rtd9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-205774             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-205774 event: Registered Node newest-cni-205774 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-205774 event: Registered Node newest-cni-205774 in Controller
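	With the node still tainted node.kubernetes.io/not-ready:NoSchedule, pods that do not tolerate the taint (the CoreDNS, metrics-server and dashboard pods listed later in this report) remain unscheduled. The condition can be watched until kindnet initializes the CNI, reusing the kubeconfig context this run already configured:
	
		kubectl --context newest-cni-205774 get nodes -w
		kubectl --context newest-cni-205774 describe node newest-cni-205774 | grep Taints
	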
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:22:39 up  1:05,  0 user,  load average: 6.56, 3.38, 2.35
	Linux newest-cni-205774 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.778502    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d198bc7be88b5812b6ea474cca2ead9-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-205774\" (UID: \"6d198bc7be88b5812b6ea474cca2ead9\") " pod="kube-system/kube-apiserver-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.778658    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36bd75cc9069ca0bcb557db00e6a01fe-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-205774\" (UID: \"36bd75cc9069ca0bcb557db00e6a01fe\") " pod="kube-system/kube-controller-manager-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.778801    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d198bc7be88b5812b6ea474cca2ead9-ca-certs\") pod \"kube-apiserver-newest-cni-205774\" (UID: \"6d198bc7be88b5812b6ea474cca2ead9\") " pod="kube-system/kube-apiserver-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.787076    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-205774\" already exists" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.812193    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-205774\" already exists" pod="kube-system/kube-controller-manager-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.813569    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-205774\" already exists" pod="kube-system/kube-apiserver-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.813642    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-205774\" already exists" pod="kube-system/kube-scheduler-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.829857    1745 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.829966    1745 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.291603    1745 apiserver.go:52] "Watching apiserver"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.338550    1745 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.385110    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-xtables-lock\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.386879    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e23f9e7-893d-4f50-adb4-3b60d08713e2-xtables-lock\") pod \"kube-proxy-7rtd9\" (UID: \"2e23f9e7-893d-4f50-adb4-3b60d08713e2\") " pod="kube-system/kube-proxy-7rtd9"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.387000    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e23f9e7-893d-4f50-adb4-3b60d08713e2-lib-modules\") pod \"kube-proxy-7rtd9\" (UID: \"2e23f9e7-893d-4f50-adb4-3b60d08713e2\") " pod="kube-system/kube-proxy-7rtd9"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.389754    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-cni-cfg\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.389787    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-lib-modules\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.536010    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-205774" containerName="kube-controller-manager"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.536676    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-205774" containerName="kube-scheduler"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.537304    1745 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.537851    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-205774" containerName="kube-apiserver"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.555834    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-205774\" already exists" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.555954    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-205774" containerName="etcd"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.538464    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-205774" containerName="etcd"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.540701    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-205774" containerName="kube-apiserver"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.548090    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-205774" containerName="kube-scheduler"
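	The repeated "Failed creating a mirror pod ... already exists" entries are restart-time noise rather than a new failure: after a kubelet restart the API server still holds the mirror pods for the static manifests, so re-creation returns AlreadyExists. The static pod manifests behind them can be listed with:
	
		minikube -p newest-cni-205774 ssh -- ls /etc/kubernetes/manifests
	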
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-205774 -n newest-cni-205774
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-205774 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b: exit status 1 (149.926255ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-b4wgx" not found
	Error from server (NotFound): pods "metrics-server-5d785b57d4-bhpwq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-whk7j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qn44b" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b: exit status 1
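helpers_test.go:286 most likely fails with NotFound because kubectl describe pod defaults to the current (default) namespace, while the pods matched by the field selector live in kube-system and the dashboard namespace; the earlier get used -A, the describe did not. A namespace-qualified form of the same check would be:
	kubectl --context newest-cni-205774 -n kube-system describe pod coredns-7d764666f9-b4wgx
	kubectl --context newest-cni-205774 get po -A --field-selector=status.phase!=Running -o wide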
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-205774
helpers_test.go:244: (dbg) docker inspect newest-cni-205774:

-- stdout --
	[
	    {
	        "Id": "0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f",
	        "Created": "2025-12-28T07:21:46.788269683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246625,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:22:16.368529836Z",
	            "FinishedAt": "2025-12-28T07:22:15.202443176Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/hosts",
	        "LogPath": "/var/lib/docker/containers/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f/0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f-json.log",
	        "Name": "/newest-cni-205774",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-205774:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-205774",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0cfd8f8a662cea70e49c4fbe0cce21df6d20247eaa7c187c5eb2d6433d04484f",
	                "LowerDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649214b3cfbbf41ec265821088a3ef5547b7ecf9ba8af39936fe3797a42043c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-205774",
	                "Source": "/var/lib/docker/volumes/newest-cni-205774/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-205774",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-205774",
	                "name.minikube.sigs.k8s.io": "newest-cni-205774",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "921bfd8535a597c4e81c2a5a5dd6483553d5dd3ea5783701b22ccff1b139e862",
	            "SandboxKey": "/var/run/docker/netns/921bfd8535a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-205774": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:2f:f0:48:e2:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5461180d41c8930319db01bd3fd5c7f93122e8a433f052e8fbf9abaf649c9cf",
	                    "EndpointID": "d812ffd497758d8cf735b8ffd01bece7f2574fb1b1aa8a8e521877bc906a9cd4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-205774",
	                        "0cfd8f8a662c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
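The inspect dump above is the full JSON document; individual fields are easier to pull with a Go-template format string, the same mechanism the status probes in this report use (--format={{.Host}}). For example, container state and the profile network's IP (the hyphenated network key needs index rather than dot access):
	docker inspect -f '{{.State.Status}}' newest-cni-205774
	docker inspect -f '{{ (index .NetworkSettings.Networks "newest-cni-205774").IPAddress }}' newest-cni-205774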
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-205774 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-468470 image list --format=json                                                                                                                                                                                                         │ embed-certs-468470                │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ pause   │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470                │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ unpause │ -p embed-certs-468470 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-468470                │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470                │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ delete  │ -p embed-certs-468470                                                                                                                                                                                                                               │ embed-certs-468470                │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:21 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:21 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ stop    │ -p newest-cni-205774 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0 │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ image   │ default-k8s-diff-port-450028 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-450028      │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028      │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p default-k8s-diff-port-450028 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-450028      │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ delete  │ -p default-k8s-diff-port-450028                                                                                                                                                                                                                     │ default-k8s-diff-port-450028      │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ delete  │ -p default-k8s-diff-port-450028                                                                                                                                                                                                                     │ default-k8s-diff-port-450028      │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p test-preload-dl-gcs-209495 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                        │ test-preload-dl-gcs-209495        │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-209495                                                                                                                                                                                                                       │ test-preload-dl-gcs-209495        │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p test-preload-dl-github-675713 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                  │ test-preload-dl-github-675713     │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ image   │ newest-cni-205774 image list --format=json                                                                                                                                                                                                          │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ pause   │ -p newest-cni-205774 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ unpause │ -p newest-cni-205774 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-205774                 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ delete  │ -p test-preload-dl-github-675713                                                                                                                                                                                                                    │ test-preload-dl-github-675713     │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-822090 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd                                                                 │ test-preload-dl-gcs-cached-822090 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-822090                                                                                                                                                                                                                │ test-preload-dl-gcs-cached-822090 │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │ 28 Dec 25 07:22 UTC │
	│ start   │ -p auto-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-742569                       │ jenkins │ v1.37.0 │ 28 Dec 25 07:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:22:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:22:40.968189  251403 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:22:40.968795  251403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:40.968858  251403 out.go:374] Setting ErrFile to fd 2...
	I1228 07:22:40.968878  251403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:22:40.969172  251403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:22:40.969647  251403 out.go:368] Setting JSON to false
	I1228 07:22:40.970577  251403 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3911,"bootTime":1766902650,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:22:40.970672  251403 start.go:143] virtualization:  
	I1228 07:22:40.975603  251403 out.go:179] * [auto-742569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:22:40.979186  251403 notify.go:221] Checking for updates...
	I1228 07:22:40.980510  251403 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:22:40.984426  251403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:22:40.987553  251403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:22:40.995041  251403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:22:40.998398  251403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:22:41.004797  251403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	27de065ebcaaa       c96ee3c174987       9 seconds ago       Running             kindnet-cni               1                   bcab27995f718       kindnet-25mjx                               kube-system
	aca3e1c12ea3f       de369f46c2ff5       10 seconds ago      Running             kube-proxy                1                   f348cc812939d       kube-proxy-7rtd9                            kube-system
	13e953a5db4e5       88898f1d1a62a       16 seconds ago      Running             kube-controller-manager   1                   9ad52d814ea6f       kube-controller-manager-newest-cni-205774   kube-system
	b03fce029c009       c3fcf259c473a       16 seconds ago      Running             kube-apiserver            1                   9921fbb661775       kube-apiserver-newest-cni-205774            kube-system
	38219e8929254       ddc8422d4d35a       17 seconds ago      Running             kube-scheduler            1                   f49469005aa67       kube-scheduler-newest-cni-205774            kube-system
	06f65bcae840d       271e49a0ebc56       17 seconds ago      Running             etcd                      1                   2b7d7917a2077       etcd-newest-cni-205774                      kube-system
	edaf51bc6e624       c96ee3c174987       27 seconds ago      Exited              kindnet-cni               0                   8fa961094d5de       kindnet-25mjx                               kube-system
	ef1607c6ac644       de369f46c2ff5       29 seconds ago      Exited              kube-proxy                0                   86e11d943bb50       kube-proxy-7rtd9                            kube-system
	4f42e043e48a9       88898f1d1a62a       41 seconds ago      Exited              kube-controller-manager   0                   be30139d61b80       kube-controller-manager-newest-cni-205774   kube-system
	7ce4a777ed519       ddc8422d4d35a       41 seconds ago      Exited              kube-scheduler            0                   cbe3f02a12a8c       kube-scheduler-newest-cni-205774            kube-system
	2d301be485e6e       271e49a0ebc56       41 seconds ago      Exited              etcd                      0                   9ca6adff980f6       etcd-newest-cni-205774                      kube-system
	8335f82f69329       c3fcf259c473a       41 seconds ago      Exited              kube-apiserver            0                   7b88ccbdbee86       kube-apiserver-newest-cni-205774            kube-system
	
	
	==> containerd <==
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.946056861Z" level=info msg="TearDown network for sandbox \"86e11d943bb50bb15cd1df1e8c4bd41f457c66fe83cf30fd66e9e8a16f68a025\" successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.946107503Z" level=info msg="StopPodSandbox for \"86e11d943bb50bb15cd1df1e8c4bd41f457c66fe83cf30fd66e9e8a16f68a025\" returns successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.952312637Z" level=info msg="StopPodSandbox for \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\""
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.952399103Z" level=info msg="Container to stop \"edaf51bc6e62468e9e5025f90cfebdb8b0e6ebc8b4c61655b8e1ff531c874edb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.953148263Z" level=info msg="TearDown network for sandbox \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\" successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.953209794Z" level=info msg="StopPodSandbox for \"8fa961094d5de9cba85b40e602924098af8ef54c071b0c5b909bf64a28a3275e\" returns successfully"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.961153757Z" level=info msg="RunPodSandbox for name:\"kube-proxy-7rtd9\" uid:\"2e23f9e7-893d-4f50-adb4-3b60d08713e2\" namespace:\"kube-system\" attempt:1"
	Dec 28 07:22:30 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:30.967106564Z" level=info msg="RunPodSandbox for name:\"kindnet-25mjx\" uid:\"f3695d38-8b23-47ff-ad74-34792577851c\" namespace:\"kube-system\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.192734518Z" level=info msg="connecting to shim bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9" address="unix:///run/containerd/s/7d6612d44969d8bbd842b645bb7e695955fa246e6c4d6fa8a50b6f7eb3b7e934" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.200135133Z" level=info msg="connecting to shim f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16" address="unix:///run/containerd/s/0828850bb353edb798a55105691a37cdf2d5dc1e4222313daee04007003412e9" namespace=k8s.io protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.573393305Z" level=info msg="RunPodSandbox for name:\"kube-proxy-7rtd9\" uid:\"2e23f9e7-893d-4f50-adb4-3b60d08713e2\" namespace:\"kube-system\" attempt:1 returns sandbox id \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.577546589Z" level=info msg="CreateContainer within sandbox \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\" for container name:\"kube-proxy\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.598291756Z" level=info msg="Container aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.613696282Z" level=info msg="CreateContainer within sandbox \"f348cc812939d23a06bc7bf68d0b7fe8b5fec84b4edb7149e26f06a37cb7fe16\" for name:\"kube-proxy\" attempt:1 returns container id \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.614437524Z" level=info msg="StartContainer for \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.617175238Z" level=info msg="connecting to shim aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6" address="unix:///run/containerd/s/0828850bb353edb798a55105691a37cdf2d5dc1e4222313daee04007003412e9" protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.749444894Z" level=info msg="RunPodSandbox for name:\"kindnet-25mjx\" uid:\"f3695d38-8b23-47ff-ad74-34792577851c\" namespace:\"kube-system\" attempt:1 returns sandbox id \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.755897841Z" level=info msg="CreateContainer within sandbox \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\" for container name:\"kindnet-cni\" attempt:1"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.793795177Z" level=info msg="Container 27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1: CDI devices from CRI Config.CDIDevices: []"
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.822984071Z" level=info msg="CreateContainer within sandbox \"bcab27995f71898e008184dceb436708c11d9c1d462359aea0b7e315b27dfec9\" for name:\"kindnet-cni\" attempt:1 returns container id \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.823775339Z" level=info msg="StartContainer for \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\""
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.824881138Z" level=info msg="connecting to shim 27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1" address="unix:///run/containerd/s/7d6612d44969d8bbd842b645bb7e695955fa246e6c4d6fa8a50b6f7eb3b7e934" protocol=ttrpc version=3
	Dec 28 07:22:31 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:31.960716200Z" level=info msg="StartContainer for \"aca3e1c12ea3f16968144883dc7f021a354f1f0e0007f15c277fa7abe19bbfb6\" returns successfully"
	Dec 28 07:22:32 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:32.556748121Z" level=info msg="StartContainer for \"27de065ebcaaa215f34371dc6d29b71b5a2cd3e6f8f88ae56f29af473c3793c1\" returns successfully"
	Dec 28 07:22:37 newest-cni-205774 containerd[553]: time="2025-12-28T07:22:37.656833362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> describe nodes <==
	Name:               newest-cni-205774
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-205774
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-205774
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T07_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 07:22:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-205774
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 07:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 07:22:37 +0000   Sun, 28 Dec 2025 07:22:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-205774
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e85c53feee27c83956a2dd28695083ed
	  System UUID:                c78b7a9c-baa5-412b-b391-8a01ce2381ac
	  Boot ID:                    2c6eba19-e411-489f-8092-9dc4c0b1564e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.2.1
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-205774                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-25mjx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-205774             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-205774    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-7rtd9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-205774             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node newest-cni-205774 event: Registered Node newest-cni-205774 in Controller
	  Normal  RegisteredNode  8s    node-controller  Node newest-cni-205774 event: Registered Node newest-cni-205774 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.073877] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 07:22:41 up  1:05,  0 user,  load average: 6.56, 3.38, 2.35
	Linux newest-cni-205774 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.778658    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36bd75cc9069ca0bcb557db00e6a01fe-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-205774\" (UID: \"36bd75cc9069ca0bcb557db00e6a01fe\") " pod="kube-system/kube-controller-manager-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.778801    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d198bc7be88b5812b6ea474cca2ead9-ca-certs\") pod \"kube-apiserver-newest-cni-205774\" (UID: \"6d198bc7be88b5812b6ea474cca2ead9\") " pod="kube-system/kube-apiserver-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.787076    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-205774\" already exists" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.812193    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-205774\" already exists" pod="kube-system/kube-controller-manager-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.813569    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-205774\" already exists" pod="kube-system/kube-apiserver-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: E1228 07:22:37.813642    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-205774\" already exists" pod="kube-system/kube-scheduler-newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.829857    1745 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-205774"
	Dec 28 07:22:37 newest-cni-205774 kubelet[1745]: I1228 07:22:37.829966    1745 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.291603    1745 apiserver.go:52] "Watching apiserver"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.338550    1745 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.385110    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-xtables-lock\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.386879    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e23f9e7-893d-4f50-adb4-3b60d08713e2-xtables-lock\") pod \"kube-proxy-7rtd9\" (UID: \"2e23f9e7-893d-4f50-adb4-3b60d08713e2\") " pod="kube-system/kube-proxy-7rtd9"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.387000    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e23f9e7-893d-4f50-adb4-3b60d08713e2-lib-modules\") pod \"kube-proxy-7rtd9\" (UID: \"2e23f9e7-893d-4f50-adb4-3b60d08713e2\") " pod="kube-system/kube-proxy-7rtd9"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.389754    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-cni-cfg\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.389787    1745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3695d38-8b23-47ff-ad74-34792577851c-lib-modules\") pod \"kindnet-25mjx\" (UID: \"f3695d38-8b23-47ff-ad74-34792577851c\") " pod="kube-system/kindnet-25mjx"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.536010    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-205774" containerName="kube-controller-manager"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.536676    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-205774" containerName="kube-scheduler"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: I1228 07:22:38.537304    1745 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.537851    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-205774" containerName="kube-apiserver"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.555834    1745 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-205774\" already exists" pod="kube-system/etcd-newest-cni-205774"
	Dec 28 07:22:38 newest-cni-205774 kubelet[1745]: E1228 07:22:38.555954    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-205774" containerName="etcd"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.538464    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-205774" containerName="etcd"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.540701    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-205774" containerName="kube-apiserver"
	Dec 28 07:22:39 newest-cni-205774 kubelet[1745]: E1228 07:22:39.548090    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-205774" containerName="kube-scheduler"
	Dec 28 07:22:40 newest-cni-205774 kubelet[1745]: E1228 07:22:40.540627    1745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-205774" containerName="etcd"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-205774 -n newest-cni-205774
E1228 07:22:42.510779    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:22:42.515995    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:22:42.526223    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:22:42.546894    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-205774 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1228 07:22:42.587076    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:22:42.667547    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:281: non-running pods: coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b: exit status 1 (119.936403ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-b4wgx" not found
	Error from server (NotFound): pods "metrics-server-5d785b57d4-bhpwq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-whk7j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qn44b" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-205774 describe pod coredns-7d764666f9-b4wgx metrics-server-5d785b57d4-bhpwq storage-provisioner dashboard-metrics-scraper-867fb5f87b-whk7j kubernetes-dashboard-b84665fb8-qn44b: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.87s)
E1228 07:27:42.510595    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:10.199751    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.393112    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.398647    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.408979    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.429355    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.469631    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.550056    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:30.710491    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:31.031131    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:31.671966    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:32.952283    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:35.513414    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:37.452789    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:38.908391    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:38.913926    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:38.924268    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:38.944660    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:38.985081    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:39.065458    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:39.225876    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:39.546461    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:40.187386    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:40.633638    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:41.467584    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:44.029295    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (295/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.32
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.14
27 TestAddons/Setup 123.56
29 TestAddons/serial/Volcano 40.69
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.89
35 TestAddons/parallel/Registry 15.96
36 TestAddons/parallel/RegistryCreds 0.81
37 TestAddons/parallel/Ingress 17.97
38 TestAddons/parallel/InspektorGadget 10.73
39 TestAddons/parallel/MetricsServer 6.83
41 TestAddons/parallel/CSI 55.57
42 TestAddons/parallel/Headlamp 16.86
43 TestAddons/parallel/CloudSpanner 6.72
44 TestAddons/parallel/LocalPath 51.37
45 TestAddons/parallel/NvidiaDevicePlugin 5.53
46 TestAddons/parallel/Yakd 10.9
48 TestAddons/StoppedEnableDisable 12.3
49 TestCertOptions 27.96
50 TestCertExpiration 215
54 TestDockerEnvContainerd 42.83
58 TestErrorSpam/setup 23.66
59 TestErrorSpam/start 0.84
60 TestErrorSpam/status 1.2
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.49
63 TestErrorSpam/stop 1.66
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 46.8
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.97
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.65
75 TestFunctional/serial/CacheCmd/cache/add_local 1.21
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 37.62
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 0.8
86 TestFunctional/serial/LogsFileCmd 0.88
87 TestFunctional/serial/InvalidService 8.38
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 6.42
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.1
97 TestFunctional/parallel/ServiceCmdConnect 7.59
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 21.04
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.35
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.11
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.96
113 TestFunctional/parallel/License 0.42
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
129 TestFunctional/parallel/ServiceCmd/List 0.64
130 TestFunctional/parallel/MountCmd/any-port 8.67
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
133 TestFunctional/parallel/ServiceCmd/Format 0.52
134 TestFunctional/parallel/ServiceCmd/URL 0.48
135 TestFunctional/parallel/MountCmd/specific-port 2.08
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.28
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 1.36
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.86
144 TestFunctional/parallel/ImageCommands/Setup 0.67
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 124.59
163 TestMultiControlPlane/serial/DeployApp 7.09
164 TestMultiControlPlane/serial/PingHostFromPods 1.64
165 TestMultiControlPlane/serial/AddWorkerNode 30.74
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.46
169 TestMultiControlPlane/serial/StopSecondaryNode 12.87
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.46
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 95.88
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.4
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
176 TestMultiControlPlane/serial/StopCluster 36.18
177 TestMultiControlPlane/serial/RestartCluster 60.34
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
179 TestMultiControlPlane/serial/AddSecondaryNode 46.49
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 44.25
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.53
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.46
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.97
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 33.1
211 TestKicCustomNetwork/use_default_bridge_network 28.94
212 TestKicExistingNetwork 31.91
213 TestKicCustomSubnet 30.78
214 TestKicStaticIP 30.85
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 61.01
219 TestMountStart/serial/StartWithMountFirst 8.2
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.32
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.67
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.4
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 70.95
231 TestMultiNode/serial/DeployApp2Nodes 5.32
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 27.9
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 10.11
237 TestMultiNode/serial/StopNode 2.35
238 TestMultiNode/serial/StartAfterStop 7.75
239 TestMultiNode/serial/RestartKeepsNodes 78.92
240 TestMultiNode/serial/DeleteNode 5.55
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 50.09
243 TestMultiNode/serial/ValidateNameConflict 30.82
250 TestScheduledStopUnix 105.15
253 TestInsufficientStorage 12.72
254 TestRunningBinaryUpgrade 55.14
256 TestKubernetesUpgrade 333.94
257 TestMissingContainerUpgrade 124.1
259 TestPause/serial/Start 56.27
260 TestPause/serial/SecondStartNoReconfiguration 7.8
261 TestPause/serial/Pause 0.54
263 TestStoppedBinaryUpgrade/Setup 0.86
264 TestStoppedBinaryUpgrade/Upgrade 311.65
265 TestStoppedBinaryUpgrade/MinikubeLogs 2
273 TestPreload/Start-NoPreload-PullImage 67.12
274 TestPreload/Restart-With-Preload-Check-User-Image 48.9
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
278 TestNoKubernetes/serial/StartWithK8s 27.85
279 TestNoKubernetes/serial/StartWithStopK8s 5.99
280 TestNoKubernetes/serial/Start 7.61
281 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
283 TestNoKubernetes/serial/ProfileList 1.03
284 TestNoKubernetes/serial/Stop 1.3
285 TestNoKubernetes/serial/StartNoArgs 6.38
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
294 TestNetworkPlugins/group/false 3.93
299 TestStartStop/group/old-k8s-version/serial/FirstStart 59.58
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
302 TestStartStop/group/old-k8s-version/serial/Stop 11.99
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/old-k8s-version/serial/SecondStart 48.12
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
310 TestStartStop/group/no-preload/serial/FirstStart 51.83
311 TestStartStop/group/no-preload/serial/DeployApp 8.33
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
313 TestStartStop/group/no-preload/serial/Stop 12.13
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 50.9
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/embed-certs/serial/FirstStart 44.95
322 TestStartStop/group/embed-certs/serial/DeployApp 10.45
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.25
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
326 TestStartStop/group/embed-certs/serial/Stop 12.28
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
328 TestStartStop/group/embed-certs/serial/SecondStart 51.23
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.72
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.2
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
339 TestStartStop/group/newest-cni/serial/FirstStart 31.68
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.2
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
344 TestStartStop/group/newest-cni/serial/Stop 1.43
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
346 TestStartStop/group/newest-cni/serial/SecondStart 18.79
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
349 TestPreload/PreloadSrc/gcs 5.61
350 TestPreload/PreloadSrc/github 6.68
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
355 TestPreload/PreloadSrc/gcs-cached 0.68
356 TestNetworkPlugins/group/auto/Start 48.88
357 TestNetworkPlugins/group/kindnet/Start 51.54
358 TestNetworkPlugins/group/auto/KubeletFlags 0.32
359 TestNetworkPlugins/group/auto/NetCatPod 10.32
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/auto/DNS 0.18
362 TestNetworkPlugins/group/auto/Localhost 0.16
363 TestNetworkPlugins/group/auto/HairPin 0.17
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
366 TestNetworkPlugins/group/kindnet/DNS 0.3
367 TestNetworkPlugins/group/kindnet/Localhost 0.23
368 TestNetworkPlugins/group/kindnet/HairPin 0.23
369 TestNetworkPlugins/group/calico/Start 63.08
370 TestNetworkPlugins/group/custom-flannel/Start 55.47
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.29
373 TestNetworkPlugins/group/calico/NetCatPod 9.32
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.38
376 TestNetworkPlugins/group/calico/DNS 0.25
377 TestNetworkPlugins/group/calico/Localhost 0.16
378 TestNetworkPlugins/group/calico/HairPin 0.16
379 TestNetworkPlugins/group/custom-flannel/DNS 0.28
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
382 TestNetworkPlugins/group/enable-default-cni/Start 70.32
383 TestNetworkPlugins/group/flannel/Start 55.67
384 TestNetworkPlugins/group/flannel/ControllerPod 6
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
386 TestNetworkPlugins/group/flannel/NetCatPod 9.29
387 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
388 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.28
389 TestNetworkPlugins/group/flannel/DNS 0.16
390 TestNetworkPlugins/group/flannel/Localhost 0.15
391 TestNetworkPlugins/group/flannel/HairPin 0.16
392 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
393 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
394 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
395 TestNetworkPlugins/group/bridge/Start 75.18
396 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
397 TestNetworkPlugins/group/bridge/NetCatPod 8.26
398 TestNetworkPlugins/group/bridge/DNS 0.17
399 TestNetworkPlugins/group/bridge/Localhost 0.16
400 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (6.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-427775 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-427775 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.257004883s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1228 06:28:03.644119    4195 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1228 06:28:03.644194    4195 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-427775
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-427775: exit status 85 (87.746129ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-427775 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-427775 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:27:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:27:57.429951    4201 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:27:57.430147    4201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:57.430180    4201 out.go:374] Setting ErrFile to fd 2...
	I1228 06:27:57.430201    4201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:57.430996    4201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	W1228 06:27:57.431181    4201 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22352-2380/.minikube/config/config.json: open /home/jenkins/minikube-integration/22352-2380/.minikube/config/config.json: no such file or directory
	I1228 06:27:57.432310    4201 out.go:368] Setting JSON to true
	I1228 06:27:57.433112    4201 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":627,"bootTime":1766902650,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 06:27:57.433216    4201 start.go:143] virtualization:  
	I1228 06:27:57.438969    4201 out.go:99] [download-only-427775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1228 06:27:57.439188    4201 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball: no such file or directory
	I1228 06:27:57.439325    4201 notify.go:221] Checking for updates...
	I1228 06:27:57.443608    4201 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:27:57.447087    4201 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:27:57.450402    4201 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:27:57.453741    4201 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 06:27:57.457026    4201 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1228 06:27:57.462931    4201 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:27:57.463169    4201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:27:57.490428    4201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:27:57.490531    4201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:57.893179    4201 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-28 06:27:57.880770857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:27:57.893285    4201 docker.go:319] overlay module found
	I1228 06:27:57.896268    4201 out.go:99] Using the docker driver based on user configuration
	I1228 06:27:57.896310    4201 start.go:309] selected driver: docker
	I1228 06:27:57.896318    4201 start.go:928] validating driver "docker" against <nil>
	I1228 06:27:57.896410    4201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:57.951814    4201 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-28 06:27:57.943313467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:27:57.951969    4201 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:27:57.952251    4201 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1228 06:27:57.952422    4201 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:27:57.955694    4201 out.go:171] Using Docker driver with root privileges
	I1228 06:27:57.958721    4201 cni.go:84] Creating CNI manager for ""
	I1228 06:27:57.958792    4201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1228 06:27:57.958805    4201 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:27:57.958893    4201 start.go:353] cluster config:
	{Name:download-only-427775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-427775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:27:57.961932    4201 out.go:99] Starting "download-only-427775" primary control-plane node in "download-only-427775" cluster
	I1228 06:27:57.961959    4201 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1228 06:27:57.964986    4201 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:27:57.965032    4201 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 06:27:57.965176    4201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:27:57.981450    4201 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:27:57.981633    4201 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:27:57.981741    4201 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:27:58.022708    4201 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1228 06:27:58.022744    4201 cache.go:65] Caching tarball of preloaded images
	I1228 06:27:58.022922    4201 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 06:27:58.026234    4201 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1228 06:27:58.026259    4201 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1228 06:27:58.026267    4201 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1228 06:27:58.106779    4201 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1228 06:27:58.106922    4201 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1228 06:28:01.337968    4201 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1228 06:28:01.338529    4201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/download-only-427775/config.json ...
	I1228 06:28:01.338604    4201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/download-only-427775/config.json: {Name:mk7428b81627a261def973b6d146402609cfd24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:01.338805    4201 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1228 06:28:01.339049    4201 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22352-2380/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-427775 host does not exist
	  To start a cluster, run: "minikube start -p download-only-427775"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-427775
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (3.32s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-977641 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-977641 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.316115947s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.32s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1228 06:28:07.394493    4195 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 06:28:07.394533    4195 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-977641
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-977641: exit status 85 (92.151792ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-427775 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-427775 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ delete  │ -p download-only-427775                                                                                                                                                               │ download-only-427775 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ start   │ -o=json --download-only -p download-only-977641 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-977641 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:04.120794    4403 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:28:04.121037    4403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:04.121065    4403 out.go:374] Setting ErrFile to fd 2...
	I1228 06:28:04.121082    4403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:04.121375    4403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:28:04.121819    4403 out.go:368] Setting JSON to true
	I1228 06:28:04.122572    4403 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":634,"bootTime":1766902650,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 06:28:04.122659    4403 start.go:143] virtualization:  
	I1228 06:28:04.126395    4403 out.go:99] [download-only-977641] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 06:28:04.126598    4403 notify.go:221] Checking for updates...
	I1228 06:28:04.129519    4403 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:04.132579    4403 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:28:04.135556    4403 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:28:04.138535    4403 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 06:28:04.141575    4403 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1228 06:28:04.147355    4403 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:04.147617    4403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:04.170145    4403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:28:04.170249    4403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:04.230368    4403 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-28 06:28:04.221281957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:28:04.230475    4403 docker.go:319] overlay module found
	I1228 06:28:04.233413    4403 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:04.233447    4403 start.go:309] selected driver: docker
	I1228 06:28:04.233454    4403 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:04.233553    4403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:04.292149    4403 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-28 06:28:04.282960648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:28:04.292307    4403 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:04.292658    4403 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1228 06:28:04.292814    4403 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:04.295938    4403 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-977641 host does not exist
	  To start a cluster, run: "minikube start -p download-only-977641"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-977641
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1228 06:28:08.528980    4195 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-555662 --alsologtostderr --binary-mirror http://127.0.0.1:34149 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-555662" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-555662
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-092445
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-092445: exit status 85 (164.396326ms)

-- stdout --
	* Profile "addons-092445" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-092445"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.14s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-092445
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-092445: exit status 85 (136.029451ms)

-- stdout --
	* Profile "addons-092445" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-092445"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.14s)

TestAddons/Setup (123.56s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-092445 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-092445 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.558823283s)
--- PASS: TestAddons/Setup (123.56s)

TestAddons/serial/Volcano (40.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 41.528891ms
addons_test.go:886: volcano-controller stabilized in 41.956398ms
addons_test.go:878: volcano-admission stabilized in 42.044948ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-rkpjv" [85ef1abf-fe07-48d9-a78b-066a2e936d70] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003810123s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-mxxl7" [1c9021f7-3df7-48f9-8504-af6da101b1e1] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003398658s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-2tgh5" [6f21d209-2b74-4c28-b6b2-0b1ff303e4f3] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004380148s
addons_test.go:905: (dbg) Run:  kubectl --context addons-092445 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-092445 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-092445 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [854c5dc4-f59a-4d6a-bb8d-bc533a0ce4c8] Pending
helpers_test.go:353: "test-job-nginx-0" [854c5dc4-f59a-4d6a-bb8d-bc533a0ce4c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [854c5dc4-f59a-4d6a-bb8d-bc533a0ce4c8] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.005568581s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable volcano --alsologtostderr -v=1: (12.092595417s)
--- PASS: TestAddons/serial/Volcano (40.69s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-092445 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-092445 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-092445 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-092445 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dc9e0f63-3c33-4e8f-a672-f4aac0104e0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dc9e0f63-3c33-4e8f-a672-f4aac0104e0e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003559803s
addons_test.go:696: (dbg) Run:  kubectl --context addons-092445 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-092445 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-092445 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-092445 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

TestAddons/parallel/Registry (15.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.847574ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-7ddh5" [09fe96fa-54db-4786-b147-f632786c4222] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003998708s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-4w295" [f762e63a-6b1f-46cf-b87c-3cadf01ba285] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003560649s
addons_test.go:394: (dbg) Run:  kubectl --context addons-092445 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-092445 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-092445 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.924129024s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 ip
2025/12/28 06:31:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.96s)

TestAddons/parallel/RegistryCreds (0.81s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.655796ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-092445
addons_test.go:334: (dbg) Run:  kubectl --context addons-092445 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

TestAddons/parallel/Ingress (17.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-092445 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-092445 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-092445 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [812cc370-9cb6-4dd8-acad-3b1579ee3516] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [812cc370-9cb6-4dd8-acad-3b1579ee3516] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.004004982s
I1228 06:31:54.931734    4195 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-092445 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable ingress-dns --alsologtostderr -v=1: (1.318669419s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable ingress --alsologtostderr -v=1: (7.903027393s)
--- PASS: TestAddons/parallel/Ingress (17.97s)

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-gb22l" [b3c9be94-fc86-42e3-aec1-2568718a0a02] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005333368s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable inspektor-gadget --alsologtostderr -v=1: (5.718755096s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.12973ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-2hv4h" [40259687-946a-4ce1-a067-d80a00d335ae] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003845892s
addons_test.go:465: (dbg) Run:  kubectl --context addons-092445 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (55.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1228 06:31:28.864388    4195 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1228 06:31:28.869177    4195 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1228 06:31:28.869220    4195 kapi.go:107] duration metric: took 8.374718ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.387715ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-092445 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-092445 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [76f9d4d9-d634-471b-a67f-b28072fc90e3] Pending
helpers_test.go:353: "task-pv-pod" [76f9d4d9-d634-471b-a67f-b28072fc90e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [76f9d4d9-d634-471b-a67f-b28072fc90e3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003901s
addons_test.go:574: (dbg) Run:  kubectl --context addons-092445 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-092445 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-092445 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-092445 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-092445 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-092445 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-092445 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [473b4589-61ee-4ed6-ba0e-59744cb045dd] Pending
helpers_test.go:353: "task-pv-pod-restore" [473b4589-61ee-4ed6-ba0e-59744cb045dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [473b4589-61ee-4ed6-ba0e-59744cb045dd] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003129199s
addons_test.go:616: (dbg) Run:  kubectl --context addons-092445 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-092445 delete pod task-pv-pod-restore: (1.20182759s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-092445 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-092445 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.925523556s)
--- PASS: TestAddons/parallel/CSI (55.57s)

TestAddons/parallel/Headlamp (16.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-092445 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-092445 --alsologtostderr -v=1: (1.002393336s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-5zslf" [0dda18e6-d402-4ce1-8d51-5ec8bf824cf7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-5zslf" [0dda18e6-d402-4ce1-8d51-5ec8bf824cf7] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004007243s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable headlamp --alsologtostderr -v=1: (5.85539965s)
--- PASS: TestAddons/parallel/Headlamp (16.86s)

TestAddons/parallel/CloudSpanner (6.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-mbblm" [402ae65c-c608-4082-90d0-bc900346759f] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003144334s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.72s)

TestAddons/parallel/LocalPath (51.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-092445 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-092445 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [abaec586-bc54-4bb6-8de5-013f2ce2f27b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [abaec586-bc54-4bb6-8de5-013f2ce2f27b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [abaec586-bc54-4bb6-8de5-013f2ce2f27b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003341128s
addons_test.go:969: (dbg) Run:  kubectl --context addons-092445 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 ssh "cat /opt/local-path-provisioner/pvc-69e4ad33-d98c-4c41-bbbc-f41e7fdb2d27_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-092445 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-092445 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.193516012s)
--- PASS: TestAddons/parallel/LocalPath (51.37s)

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-mrldq" [4228cf3e-067f-48e8-8695-61f76228b35b] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003004583s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (10.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-2d775" [5d69743a-f5ca-4ad4-a259-7de74868b5d6] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008807458s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-092445 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-092445 addons disable yakd --alsologtostderr -v=1: (5.890708329s)
--- PASS: TestAddons/parallel/Yakd (10.90s)

TestAddons/StoppedEnableDisable (12.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-092445
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-092445: (12.012984339s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-092445
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-092445
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-092445
--- PASS: TestAddons/StoppedEnableDisable (12.30s)

TestCertOptions (27.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-913529 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.747130802s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-913529 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-913529 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-913529 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-913529" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-913529
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-913529: (2.481177579s)
--- PASS: TestCertOptions (27.96s)

TestCertExpiration (215s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-478620 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1228 07:08:16.079904    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-478620 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.651838791s)
E1228 07:10:13.021295    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-478620 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-478620 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.011155254s)
helpers_test.go:176: Cleaning up "cert-expiration-478620" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-478620
E1228 07:11:40.815199    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-478620: (2.341238913s)
--- PASS: TestCertExpiration (215.00s)

TestDockerEnvContainerd (42.83s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-667730 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-667730 --driver=docker  --container-runtime=containerd: (27.572537142s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-667730"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-667730": (1.052690939s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MzyyRFNYBF0Q/agent.23813" SSH_AGENT_PID="23814" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MzyyRFNYBF0Q/agent.23813" SSH_AGENT_PID="23814" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MzyyRFNYBF0Q/agent.23813" SSH_AGENT_PID="23814" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.272742213s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MzyyRFNYBF0Q/agent.23813" SSH_AGENT_PID="23814" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-667730" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-667730
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-667730: (1.982256099s)
--- PASS: TestDockerEnvContainerd (42.83s)
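
The SSH_AUTH_SOCK/DOCKER_HOST pattern above is how the output of docker-env --ssh-host --ssh-add is consumed. A minimal sketch, reusing the agent socket, agent PID, and forwarded port from this particular run (all three are per-run values and will differ on a fresh cluster):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Point the docker CLI at the minikube node's dockerd over SSH,
	// authenticating via the agent that --ssh-add populated.
	cmd := exec.Command("docker", "version")
	cmd.Env = append(os.Environ(),
		"DOCKER_HOST=ssh://docker@127.0.0.1:32773",
		"SSH_AUTH_SOCK=/tmp/ssh-MzyyRFNYBF0Q/agent.23813",
		"SSH_AGENT_PID=23814",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("docker version over ssh: %v\n%s", err, out)
	}
	os.Stdout.Write(out)
}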

TestErrorSpam/setup (23.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-071774 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-071774 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-071774 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-071774 --driver=docker  --container-runtime=containerd: (23.658892587s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (23.66s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (1.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 stop: (1.424172632s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-071774 --log_dir /tmp/nospam-071774 stop
--- PASS: TestErrorSpam/stop (1.66s)
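
All five subtests above share one shape: run the same nospam subcommand repeatedly and fail on any stderr line that is not on an allow-list (for example, the kubectl 1.33.2 version warning accepted in setup). A loose sketch of that check, with the allow-list reduced to "no stderr at all":

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// spamCheck runs one nospam subcommand and reports any stderr output,
// loosely mirroring the repeated runs logged above (the real harness
// additionally whitelists known-acceptable warnings).
func spamCheck(sub string) error {
	var stderr strings.Builder
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "nospam-071774", "--log_dir", "/tmp/nospam-071774", sub)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	if s := strings.TrimSpace(stderr.String()); s != "" {
		return fmt.Errorf("unexpected stderr from %q: %s", sub, s)
	}
	return nil
}

func main() {
	// The start subtest uses --dry-run; the other verbs run directly.
	for _, sub := range []string{"status", "pause", "unpause", "stop"} {
		if err := spamCheck(sub); err != nil {
			fmt.Println(err)
		}
	}
}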

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/test/nested/copy/4195/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (46.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1228 06:35:13.029496    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.035554    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.045847    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.066323    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.106564    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.186768    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.347208    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:13.667743    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:14.308705    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:15.589681    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:18.149931    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:23.271075    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:33.512264    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-243289 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (46.802661098s)
--- PASS: TestFunctional/serial/StartWithProxy (46.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.97s)

=== RUN   TestFunctional/serial/SoftStart
I1228 06:35:36.578808    4195 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-243289 --alsologtostderr -v=8: (6.96514753s)
functional_test.go:678: soft start took 6.96812369s for "functional-243289" cluster.
I1228 06:35:43.544278    4195 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.97s)
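
A "soft" start re-runs start against a profile that is already up, so it should reuse the existing node rather than reprovision (about 7s here, versus about 47s for the cold StartWithProxy run above). A sketch of the timing check:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The functional-243289 profile is assumed to already be running;
	// this start should be a fast no-op reconciliation.
	begin := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "functional-243289", "--alsologtostderr", "-v=8")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("soft start failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("soft start took %s\n", time.Since(begin))
}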

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-243289 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:3.1: (1.41738005s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:3.3: (1.182218539s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 cache add registry.k8s.io/pause:latest: (1.054696587s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-243289 /tmp/TestFunctionalserialCacheCmdcacheadd_local2242369987/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache add minikube-local-cache-test:functional-243289
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache delete minikube-local-cache-test:functional-243289
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-243289
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.956836ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
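
The sequence above is the whole point of cache reload: delete the image inside the node, confirm inspecti now fails, reload from minikube's local cache, confirm it succeeds again. A condensed sketch of the same four steps:

package main

import (
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the functional profile.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-243289"}, args...)...).CombinedOutput()
}

func main() {
	// Remove the image inside the node...
	if out, err := mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("rmi: %v\n%s", err, out)
	}
	// ...so inspecti is expected to fail (exit status 1 above)...
	if _, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present; expected inspecti to fail")
	}
	// ...and `cache reload` must push the cached image back in.
	if out, err := mk("cache", "reload"); err != nil {
		log.Fatalf("cache reload: %v\n%s", err, out)
	}
	if out, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image missing after reload: %v\n%s", err, out)
	}
}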

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 kubectl -- --context functional-243289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-243289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (37.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1228 06:35:53.992623    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-243289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.61883891s)
functional_test.go:776: restart took 37.618945758s for "functional-243289" cluster.
I1228 06:36:28.862475    4195 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (37.62s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-243289 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
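
The health check parses kubectl's JSON for the tier=control-plane pods and asserts each is Running with a Ready condition, which is what the phase/status pairs above report. A minimal decoding sketch, declaring only the fields the check needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList is the slice of the PodList schema needed for the check.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string
		}
		Status struct {
			Phase      string
			Conditions []struct {
				Type   string
				Status string
			}
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-243289",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}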

TestFunctional/serial/LogsCmd (0.8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 logs
--- PASS: TestFunctional/serial/LogsCmd (0.80s)

TestFunctional/serial/LogsFileCmd (0.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 logs --file /tmp/TestFunctionalserialLogsFileCmd2457546380/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.88s)

TestFunctional/serial/InvalidService (8.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-243289 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-243289
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-243289: exit status 115 (386.313001ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30227 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-243289 delete -f testdata/invalidsvc.yaml
E1228 06:36:34.952806    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2337: (dbg) Done: kubectl --context functional-243289 delete -f testdata/invalidsvc.yaml: (4.754328978s)
--- PASS: TestFunctional/serial/InvalidService (8.38s)
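
Exit status 115 above is minikube's SVC_UNREACHABLE code: the NodePort is allocated, but no pod ever backs the service. A sketch asserting that exact exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` on a service whose pod never becomes ready
	// should fail with SVC_UNREACHABLE, i.e. exit status 115.
	cmd := exec.Command("out/minikube-linux-arm64",
		"service", "invalid-svc", "-p", "functional-243289")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got expected exit status 115 (SVC_UNREACHABLE)")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}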

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 config get cpus: exit status 14 (79.905588ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 config get cpus: exit status 14 (63.572167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
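
Exit status 14 here is the "key not found in config" code, returned both before the set and after the unset. A round-trip sketch of the same unset/set/get/unset cycle:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// config runs a `minikube config` subcommand and returns output + exit code.
func config(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-243289", "config"}, args...)...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	// A missing key must exit 14, as in the run above.
	if _, code := config("get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 for a missing key, got", code)
	}
	config("set", "cpus", "2")
	if val, _ := config("get", "cpus"); val != "2" {
		fmt.Println("expected cpus=2, got", val)
	}
	config("unset", "cpus")
}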

TestFunctional/parallel/DashboardCmd (6.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-243289 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-243289 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 37683: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.42s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-243289 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (185.77434ms)

-- stdout --
	* [functional-243289] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1228 06:37:07.413073   37385 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:37:07.413183   37385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:07.413191   37385 out.go:374] Setting ErrFile to fd 2...
	I1228 06:37:07.413196   37385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:07.413451   37385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:37:07.413798   37385 out.go:368] Setting JSON to false
	I1228 06:37:07.414642   37385 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1177,"bootTime":1766902650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 06:37:07.414761   37385 start.go:143] virtualization:  
	I1228 06:37:07.418036   37385 out.go:179] * [functional-243289] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 06:37:07.421950   37385 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:37:07.422016   37385 notify.go:221] Checking for updates...
	I1228 06:37:07.427930   37385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:37:07.430673   37385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:37:07.433457   37385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 06:37:07.436389   37385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 06:37:07.439436   37385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:37:07.442734   37385 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:37:07.443285   37385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:37:07.472680   37385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:37:07.472797   37385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:37:07.533037   37385 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:37:07.523788529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:37:07.533140   37385 docker.go:319] overlay module found
	I1228 06:37:07.536556   37385 out.go:179] * Using the docker driver based on existing profile
	I1228 06:37:07.539350   37385 start.go:309] selected driver: docker
	I1228 06:37:07.539370   37385 start.go:928] validating driver "docker" against &{Name:functional-243289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-243289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:37:07.539480   37385 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:37:07.542985   37385 out.go:203] 
	W1228 06:37:07.545913   37385 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1228 06:37:07.548776   37385 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
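
Both dry-run invocations above validate flags without touching the cluster: 250MB trips minikube's 1800MB memory floor (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the flag-clean invocation exits 0. A sketch comparing the two exit codes:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// dryRun starts the profile in --dry-run mode and returns the exit code
// (-1 for errors that are not ordinary non-zero exits).
func dryRun(extra ...string) int {
	args := append([]string{"start", "-p", "functional-243289", "--dry-run",
		"--alsologtostderr", "--driver=docker", "--container-runtime=containerd"}, extra...)
	err := exec.Command("out/minikube-linux-arm64", args...).Run()
	if err == nil {
		return 0
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	return -1
}

func main() {
	// 250MB is below the usable minimum, so validation rejects it even
	// in dry-run mode.
	fmt.Println("250MB  ->", dryRun("--memory", "250MB")) // expect 23
	fmt.Println("default->", dryRun())                    // expect 0
}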

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-243289 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-243289 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (209.684258ms)

-- stdout --
	* [functional-243289] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1228 06:37:07.211758   37337 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:37:07.212586   37337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:07.212633   37337 out.go:374] Setting ErrFile to fd 2...
	I1228 06:37:07.212669   37337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:07.213795   37337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:37:07.215328   37337 out.go:368] Setting JSON to false
	I1228 06:37:07.216541   37337 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1177,"bootTime":1766902650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 06:37:07.216667   37337 start.go:143] virtualization:  
	I1228 06:37:07.220995   37337 out.go:179] * [functional-243289] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1228 06:37:07.224517   37337 notify.go:221] Checking for updates...
	I1228 06:37:07.227970   37337 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:37:07.231102   37337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:37:07.233986   37337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 06:37:07.236936   37337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 06:37:07.242553   37337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 06:37:07.246346   37337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:37:07.249688   37337 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:37:07.250244   37337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:37:07.284581   37337 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:37:07.284684   37337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:37:07.347670   37337 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:37:07.337851635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:37:07.347774   37337 docker.go:319] overlay module found
	I1228 06:37:07.350914   37337 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1228 06:37:07.353740   37337 start.go:309] selected driver: docker
	I1228 06:37:07.353762   37337 start.go:928] validating driver "docker" against &{Name:functional-243289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-243289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:37:07.353859   37337 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:37:07.357236   37337 out.go:203] 
	W1228 06:37:07.359977   37337 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1228 06:37:07.362692   37337 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

TestFunctional/parallel/ServiceCmdConnect (7.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-243289 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-243289 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-gxf9x" [322ee2e8-9325-4835-94ef-bc0e8d47c09c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-gxf9x" [322ee2e8-9325-4835-94ef-bc0e8d47c09c] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003735265s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:32017
functional_test.go:1685: http://192.168.49.2:32017: success! body:
Request served by hello-node-connect-5d95464fd4-gxf9x

HTTP/1.1 GET /

Host: 192.168.49.2:32017
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.59s)
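
The connectivity check above boils down to: ask minikube for the service's NodePort URL, GET it, and confirm the echo-server names the serving pod. A sketch of those two steps (the port, 32017 in this run, is assigned per-run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL for the deployment exposed above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-243289",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echo-server answers with the serving pod's name, which is how
	// the test confirms traffic actually reached the deployment.
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}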

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (21.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [ea735221-78d0-4293-a3b2-68618f9ba4b6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004161353s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-243289 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-243289 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-243289 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-243289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7c74ea17-0814-443e-a3a5-c1da32d9b3c1] Pending
helpers_test.go:353: "sp-pod" [7c74ea17-0814-443e-a3a5-c1da32d9b3c1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7c74ea17-0814-443e-a3a5-c1da32d9b3c1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003628897s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-243289 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-243289 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-243289 delete -f testdata/storage-provisioner/pod.yaml: (1.024007684s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-243289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4d976d6e-f9ea-444a-9c98-9a96e6ef6095] Pending
helpers_test.go:353: "sp-pod" [4d976d6e-f9ea-444a-9c98-9a96e6ef6095] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003865678s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-243289 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.04s)
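
The persistence argument sits in the middle of the log: touch a file on the PVC-backed mount, delete the pod, recreate it, and list the mount again. A kubectl-driven sketch of the same sequence (it assumes the recreated sp-pod is already Running, which the harness waits for between the apply and the ls):

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a command against the functional profile's context and
// fails fast on error.
func kubectl(args ...string) []byte {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-243289"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Write a marker file onto the PVC-backed mount in the first pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Delete and recreate the pod; the claim (and its data) must outlive it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The marker file should still be there on the fresh pod.
	out := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	log.Printf("contents after recreate: %s", out)
}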

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh -n functional-243289 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cp functional-243289:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1541282722/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh -n functional-243289 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh -n functional-243289 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4195/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /etc/test/nested/copy/4195/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4195.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /etc/ssl/certs/4195.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4195.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /usr/share/ca-certificates/4195.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/41952.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /etc/ssl/certs/41952.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/41952.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /usr/share/ca-certificates/41952.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
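
Note: the paired checks above verify that each synced certificate is visible both under its .pem name and under its OpenSSL subject-hash name (the .0 files), the form ca-certificates directories use for lookup. The hash can be recomputed by hand (filename from this run; the output should match the .0 name checked above):

    openssl x509 -noout -subject_hash -in /etc/ssl/certs/4195.pem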

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-243289 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
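
Note: the go-template above walks the first node's label map and prints only the keys. A slightly friendlier variant (a sketch, not what the test runs) prints key=value pairs one per line:

    kubectl --context functional-243289 get nodes -o go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'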

TestFunctional/parallel/NonActiveRuntimeDisabled (0.96s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "sudo systemctl is-active docker": exit status 1 (679.870438ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "sudo systemctl is-active crio": exit status 1 (281.459253ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.96s)
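
Note: the non-zero exits above are the expected result. systemctl is-active exits 0 only when the unit is active, and the observed remote status 3 plus "inactive" on stdout is what the test accepts for the two runtimes that should be off. The same probes by hand (the containerd line is an assumption for contrast, not part of this test):

    minikube -p functional-243289 ssh "sudo systemctl is-active docker"      # expect: inactive, remote exit 3
    minikube -p functional-243289 ssh "sudo systemctl is-active crio"        # expect: inactive, remote exit 3
    minikube -p functional-243289 ssh "sudo systemctl is-active containerd"  # expect: active, exit 0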

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 34748: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-243289 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a5dd741d-a801-4dc3-9204-5c1f088dfafb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a5dd741d-a801-4dc3-9204-5c1f088dfafb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003610449s
I1228 06:36:49.219157    4195 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-243289 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.73.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
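
Note: AccessDirect confirms the LoadBalancer IP assigned while minikube tunnel runs is reachable from the host. The same check by hand, using the ingress IP reported above (the backing service is nginx, so a welcome page is the expected response):

    kubectl --context functional-243289 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.109.73.249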

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-243289 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-243289 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-243289 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-8whsz" [a01713a1-078b-4141-a523-51d1f3452073] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-8whsz" [a01713a1-078b-4141-a523-51d1f3452073] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004957156s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
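
Note: the misspelled "profile lis" above appears deliberate. The subtest is named profile_not_create, so the point is presumably that a mistyped profile command must not silently create a profile; the JSON listing that follows is the verification step:

    out/minikube-linux-arm64 profile lis                  # typo on purpose
    out/minikube-linux-arm64 profile list --output json   # confirm no profile named "lis" exists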

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "363.904778ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "60.585569ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "437.682811ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "78.626943ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/MountCmd/any-port (8.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdany-port1634138204/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766903823774422247" to /tmp/TestFunctionalparallelMountCmdany-port1634138204/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766903823774422247" to /tmp/TestFunctionalparallelMountCmdany-port1634138204/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766903823774422247" to /tmp/TestFunctionalparallelMountCmdany-port1634138204/001/test-1766903823774422247
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (457.630197ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:37:04.233097    4195 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 28 06:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 28 06:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 28 06:37 test-1766903823774422247
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh cat /mount-9p/test-1766903823774422247
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-243289 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [dbbd8936-d3fe-4419-9fbb-8f8bc34ece2e] Pending
helpers_test.go:353: "busybox-mount" [dbbd8936-d3fe-4419-9fbb-8f8bc34ece2e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [dbbd8936-d3fe-4419-9fbb-8f8bc34ece2e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [dbbd8936-d3fe-4419-9fbb-8f8bc34ece2e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004193096s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-243289 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdany-port1634138204/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.67s)
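
Note: the any-port flow above starts a 9p mount daemon on an unspecified port, polls findmnt until the mount appears (one 500ms retry was needed here), exercises the mount from a pod, then unmounts. A minimal by-hand version, assuming profile "demo" and host directory /tmp/demo (both hypothetical):

    minikube mount -p demo /tmp/demo:/mount-9p &            # background mount daemon
    minikube -p demo ssh "findmnt -T /mount-9p | grep 9p"   # retry until this succeeds
    minikube -p demo ssh -- ls -la /mount-9p
    minikube -p demo ssh "sudo umount -f /mount-9p"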

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service list -o json
functional_test.go:1509: Took "636.999186ms" to run "out/minikube-linux-arm64 -p functional-243289 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31290
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31290
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdspecific-port3676848561/001:/mount-9p --alsologtostderr -v=1 --port 36695]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.279466ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:37:12.931237    4195 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdspecific-port3676848561/001:/mount-9p --alsologtostderr -v=1 --port 36695] ...
2025/12/28 06:37:13 [DEBUG] GET http://127.0.0.1:39939/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "sudo umount -f /mount-9p": exit status 1 (436.258791ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-243289 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdspecific-port3676848561/001:/mount-9p --alsologtostderr -v=1 --port 36695] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T" /mount1: exit status 1 (696.198678ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-243289 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-243289 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1193680372/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.28s)
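
Note: VerifyCleanup leans on "minikube mount --kill=true", which tears down every mount daemon for the profile in one shot; that is why the three per-mount stop attempts afterwards find no parent process and assume the daemons are already gone. By hand:

    minikube mount -p functional-243289 --kill=true   # kill all mount daemons for this profile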

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 version -o=json --components: (1.363998211s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-243289 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-243289
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-243289 image ls --format short --alsologtostderr:
I1228 06:37:22.486665   40452 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:22.486922   40452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.486931   40452 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:22.486937   40452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.487342   40452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 06:37:22.488183   40452 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.488314   40452 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.488812   40452 cli_runner.go:164] Run: docker container inspect functional-243289 --format={{.State.Status}}
I1228 06:37:22.505622   40452 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:22.505689   40452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-243289
I1228 06:37:22.525403   40452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/functional-243289/id_rsa Username:docker}
I1228 06:37:22.626978   40452 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
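
Note: the stderr trace shows how image ls is served on the containerd runtime: minikube opens an SSH session into the node, reads the image list via crictl, and formats it locally. The underlying data can be inspected directly:

    minikube -p functional-243289 ssh -- sudo crictl images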

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-243289 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-243289                     │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test       │ functional-243289                     │ sha256:848fbc │ 991B   │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-243289 image ls --format table --alsologtostderr:
I1228 06:37:22.740350   40530 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:22.740492   40530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.740504   40530 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:22.740510   40530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.740760   40530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 06:37:22.741347   40530 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.741452   40530 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.741955   40530 cli_runner.go:164] Run: docker container inspect functional-243289 --format={{.State.Status}}
I1228 06:37:22.761311   40530 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:22.761362   40530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-243289
I1228 06:37:22.781025   40530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/functional-243289/id_rsa Username:docker}
I1228 06:37:22.879944   40530 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-243289 image ls --format json --alsologtostderr:
[{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d6
5b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":
"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289"],"size":"2173567"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scr
aper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e
8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:848fbc170fe1bda205950e8718a6c8d27f56db9a4b87e97a470144d3494fa0ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-243289"],"size":"991"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-243289 image ls --format json --alsologtostderr:
I1228 06:37:22.733189   40523 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:22.733394   40523 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.733420   40523 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:22.733439   40523 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.737764   40523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 06:37:22.738493   40523 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.739400   40523 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.740007   40523 cli_runner.go:164] Run: docker container inspect functional-243289 --format={{.State.Status}}
I1228 06:37:22.759238   40523 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:22.759311   40523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-243289
I1228 06:37:22.778128   40523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/functional-243289/id_rsa Username:docker}
I1228 06:37:22.875955   40523 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-243289 image ls --format yaml --alsologtostderr:
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:848fbc170fe1bda205950e8718a6c8d27f56db9a4b87e97a470144d3494fa0ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-243289
size: "991"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
size: "2173567"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-243289 image ls --format yaml --alsologtostderr:
I1228 06:37:22.488558   40453 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:22.488694   40453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.488749   40453 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:22.488774   40453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:22.489108   40453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 06:37:22.489734   40453 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.489911   40453 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:22.490499   40453 cli_runner.go:164] Run: docker container inspect functional-243289 --format={{.State.Status}}
I1228 06:37:22.508206   40453 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:22.508255   40453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-243289
I1228 06:37:22.534114   40453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/functional-243289/id_rsa Username:docker}
I1228 06:37:22.637525   40453 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-243289 ssh pgrep buildkitd: exit status 1 (278.958243ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image build -t localhost/my-image:functional-243289 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 image build -t localhost/my-image:functional-243289 testdata/build --alsologtostderr: (3.35632154s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-243289 image build -t localhost/my-image:functional-243289 testdata/build --alsologtostderr:
I1228 06:37:23.248823   40664 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:23.249051   40664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:23.249081   40664 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:23.249100   40664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:23.249380   40664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 06:37:23.250027   40664 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:23.252620   40664 config.go:182] Loaded profile config "functional-243289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 06:37:23.253182   40664 cli_runner.go:164] Run: docker container inspect functional-243289 --format={{.State.Status}}
I1228 06:37:23.269568   40664 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:23.269627   40664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-243289
I1228 06:37:23.285810   40664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/functional-243289/id_rsa Username:docker}
I1228 06:37:23.382890   40664 build_images.go:162] Building image from path: /tmp/build.909572136.tar
I1228 06:37:23.382958   40664 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1228 06:37:23.391161   40664 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.909572136.tar
I1228 06:37:23.394892   40664 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.909572136.tar: stat -c "%s %y" /var/lib/minikube/build/build.909572136.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.909572136.tar': No such file or directory
I1228 06:37:23.394920   40664 ssh_runner.go:362] scp /tmp/build.909572136.tar --> /var/lib/minikube/build/build.909572136.tar (3072 bytes)
I1228 06:37:23.412293   40664 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.909572136
I1228 06:37:23.420319   40664 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.909572136 -xf /var/lib/minikube/build/build.909572136.tar
I1228 06:37:23.428764   40664 containerd.go:402] Building image: /var/lib/minikube/build/build.909572136
I1228 06:37:23.428834   40664 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.909572136 --local dockerfile=/var/lib/minikube/build/build.909572136 --output type=image,name=localhost/my-image:functional-243289
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a38da1b531817b740877b47da9c7995bb9da9d12547fd8ef9896a7c6e9edffba 0.0s done
#8 exporting config sha256:447cdd044e7d49fa12e6a80abf992098a1724c2970e26fff5db3bb2a9562dd31 0.0s done
#8 naming to localhost/my-image:functional-243289 done
#8 DONE 0.2s
I1228 06:37:26.530319   40664 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.909572136 --local dockerfile=/var/lib/minikube/build/build.909572136 --output type=image,name=localhost/my-image:functional-243289: (3.101454809s)
I1228 06:37:26.530403   40664 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.909572136
I1228 06:37:26.539252   40664 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.909572136.tar
I1228 06:37:26.548047   40664 build_images.go:218] Built localhost/my-image:functional-243289 from /tmp/build.909572136.tar
I1228 06:37:26.548078   40664 build_images.go:134] succeeded building to: functional-243289
I1228 06:37:26.548084   40664 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)
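Note: the log above shows how `minikube image build` works on a containerd runtime: the build context is tarred on the host, copied to the node over SSH, unpacked under /var/lib/minikube/build, built with BuildKit's buildctl, and cleaned up. A minimal hand-run sketch of the same flow (the ctx directory and tar names are hypothetical; the buildctl flags are taken verbatim from the log):

    # On the host: tar a directory holding the Dockerfile, then copy it into the node.
    tar -cf build.tar -C ./ctx .
    out/minikube-linux-arm64 -p functional-243289 cp build.tar /home/docker/build.tar

    # Inside `minikube ssh`: unpack and build, mirroring the ssh_runner commands above.
    sudo mkdir -p /var/lib/minikube/build/ctx
    sudo tar -C /var/lib/minikube/build/ctx -xf /home/docker/build.tar
    sudo buildctl build --frontend dockerfile.v0 \
      --local context=/var/lib/minikube/build/ctx \
      --local dockerfile=/var/lib/minikube/build/ctx \
      --output type=image,name=localhost/my-image:functional-243289
    sudo rm -rf /var/lib/minikube/build/ctx /home/docker/build.tar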

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr: (1.065488464s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
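Note: all three UpdateContextCmd variants drive the same command; `minikube update-context` rewrites the profile's kubeconfig cluster entry to point at the apiserver's current address. A quick way to observe the effect (the jsonpath query is an illustration, not part of the test):

    out/minikube-linux-arm64 -p functional-243289 update-context
    # Print the server URL the kubeconfig now carries for this profile.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-243289")].cluster.server}'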

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr: (1.046818721s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-243289 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
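Note: taken together, the ImageCommands subtests above walk one image through every transfer direction minikube supports: host daemon to cluster, cluster to tar archive and back, and cluster back to the host daemon. Condensed from the commands in the logs:

    # Host daemon -> cluster containerd store.
    docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
    docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
    out/minikube-linux-arm64 -p functional-243289 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289

    # Cluster -> tar archive -> cluster (remove first so the reload is observable).
    out/minikube-linux-arm64 -p functional-243289 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289 ./echo-server-save.tar
    out/minikube-linux-arm64 -p functional-243289 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
    out/minikube-linux-arm64 -p functional-243289 image load ./echo-server-save.tar

    # Cluster -> host daemon, then verify on both sides.
    out/minikube-linux-arm64 -p functional-243289 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
    docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
    out/minikube-linux-arm64 -p functional-243289 image ls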

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-243289
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-243289
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-243289
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (124.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1228 06:37:56.873645    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m3.689333891s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (124.59s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 kubectl -- rollout status deployment/busybox: (4.272278774s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-qgcfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-s7bbs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-zj4vq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-qgcfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-s7bbs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-zj4vq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-qgcfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-s7bbs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-zj4vq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.09s)
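Note: DeployApp is effectively a cluster-DNS smoke test: once the busybox Deployment rolls out, every replica must resolve an external name, the in-cluster service name, and its fully qualified form. The equivalent manual check for a single pod (pod selection via jsonpath mirrors the test's pod listing):

    kubectl --context ha-401098 rollout status deployment/busybox
    POD=$(kubectl --context ha-401098 get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-401098 exec "$POD" -- nslookup kubernetes.io
    kubectl --context ha-401098 exec "$POD" -- nslookup kubernetes.default
    kubectl --context ha-401098 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local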

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-qgcfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-qgcfl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-s7bbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-s7bbs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-zj4vq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 kubectl -- exec busybox-769dd8b7dd-zj4vq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)
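Note: this subtest checks the reverse path, pod to host. The awk/cut pipeline in the log extracts the bare address from the fifth line of nslookup's output for host.minikube.internal, then each pod sends a single ping to it (192.168.49.1 is the docker-driver network gateway in this run). A sketch, reusing $POD from the example above:

    HOST_IP=$(kubectl --context ha-401098 exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-401098 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"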

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.74s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 node add --alsologtostderr -v 5: (29.677365571s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
E1228 06:40:13.022124    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5: (1.05911672s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.74s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-401098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.081016979s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 status --output json --alsologtostderr -v 5: (1.037096717s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp testdata/cp-test.txt ha-401098:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1574922971/001/cp-test_ha-401098.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098:/home/docker/cp-test.txt ha-401098-m02:/home/docker/cp-test_ha-401098_ha-401098-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test_ha-401098_ha-401098-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098:/home/docker/cp-test.txt ha-401098-m03:/home/docker/cp-test_ha-401098_ha-401098-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test_ha-401098_ha-401098-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098:/home/docker/cp-test.txt ha-401098-m04:/home/docker/cp-test_ha-401098_ha-401098-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test_ha-401098_ha-401098-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp testdata/cp-test.txt ha-401098-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1574922971/001/cp-test_ha-401098-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m02:/home/docker/cp-test.txt ha-401098:/home/docker/cp-test_ha-401098-m02_ha-401098.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test_ha-401098-m02_ha-401098.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m02:/home/docker/cp-test.txt ha-401098-m03:/home/docker/cp-test_ha-401098-m02_ha-401098-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test_ha-401098-m02_ha-401098-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m02:/home/docker/cp-test.txt ha-401098-m04:/home/docker/cp-test_ha-401098-m02_ha-401098-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test_ha-401098-m02_ha-401098-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp testdata/cp-test.txt ha-401098-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1574922971/001/cp-test_ha-401098-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m03:/home/docker/cp-test.txt ha-401098:/home/docker/cp-test_ha-401098-m03_ha-401098.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test_ha-401098-m03_ha-401098.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m03:/home/docker/cp-test.txt ha-401098-m02:/home/docker/cp-test_ha-401098-m03_ha-401098-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test_ha-401098-m03_ha-401098-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m03:/home/docker/cp-test.txt ha-401098-m04:/home/docker/cp-test_ha-401098-m03_ha-401098-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test_ha-401098-m03_ha-401098-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp testdata/cp-test.txt ha-401098-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1574922971/001/cp-test_ha-401098-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m04:/home/docker/cp-test.txt ha-401098:/home/docker/cp-test_ha-401098-m04_ha-401098.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test_ha-401098-m04_ha-401098.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m04:/home/docker/cp-test.txt ha-401098-m02:/home/docker/cp-test_ha-401098-m04_ha-401098-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m02 "sudo cat /home/docker/cp-test_ha-401098-m04_ha-401098-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 cp ha-401098-m04:/home/docker/cp-test.txt ha-401098-m03:/home/docker/cp-test_ha-401098-m04_ha-401098-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098-m03 "sudo cat /home/docker/cp-test_ha-401098-m04_ha-401098-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.46s)
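Note: CopyFile fans one testdata file out through every node pairing: host to node, node to host, and node to node (both endpoints of `minikube cp` may use the NODE:path form), verifying each copy by reading it back with `minikube ssh ... cat`. The core pattern for one pair, taken from the log:

    # Host -> node, then read it back over SSH.
    out/minikube-linux-arm64 -p ha-401098 cp testdata/cp-test.txt ha-401098:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-401098 ssh -n ha-401098 "sudo cat /home/docker/cp-test.txt"
    # Node -> node.
    out/minikube-linux-arm64 -p ha-401098 cp ha-401098:/home/docker/cp-test.txt \
      ha-401098-m02:/home/docker/cp-test_ha-401098_ha-401098-m02.txt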

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node stop m02 --alsologtostderr -v 5
E1228 06:40:40.717941    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 node stop m02 --alsologtostderr -v 5: (12.102345076s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5: exit status 7 (769.22238ms)

-- stdout --
	ha-401098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401098-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401098-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1228 06:40:46.361700   57065 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:40:46.361830   57065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:40:46.361841   57065 out.go:374] Setting ErrFile to fd 2...
	I1228 06:40:46.361846   57065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:40:46.362094   57065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:40:46.362286   57065 out.go:368] Setting JSON to false
	I1228 06:40:46.362318   57065 mustload.go:66] Loading cluster: ha-401098
	I1228 06:40:46.362426   57065 notify.go:221] Checking for updates...
	I1228 06:40:46.362799   57065 config.go:182] Loaded profile config "ha-401098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:40:46.362819   57065 status.go:174] checking status of ha-401098 ...
	I1228 06:40:46.363359   57065 cli_runner.go:164] Run: docker container inspect ha-401098 --format={{.State.Status}}
	I1228 06:40:46.384104   57065 status.go:371] ha-401098 host status = "Running" (err=<nil>)
	I1228 06:40:46.384128   57065 host.go:66] Checking if "ha-401098" exists ...
	I1228 06:40:46.384421   57065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401098
	I1228 06:40:46.405201   57065 host.go:66] Checking if "ha-401098" exists ...
	I1228 06:40:46.405590   57065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:40:46.405641   57065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401098
	I1228 06:40:46.423223   57065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/ha-401098/id_rsa Username:docker}
	I1228 06:40:46.522184   57065 ssh_runner.go:195] Run: systemctl --version
	I1228 06:40:46.528414   57065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:40:46.541976   57065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:40:46.602523   57065 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-28 06:40:46.591241865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:40:46.603172   57065 kubeconfig.go:125] found "ha-401098" server: "https://192.168.49.254:8443"
	I1228 06:40:46.603244   57065 api_server.go:166] Checking apiserver status ...
	I1228 06:40:46.603300   57065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:40:46.618718   57065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	I1228 06:40:46.627793   57065 api_server.go:192] apiserver freezer: "8:freezer:/docker/b212d9a8c35037d4710584057be78fee6116d36808710270fe7bddde22f11439/kubepods/burstable/pod8576f111b1ff388080d65046518011ee/c7d3b56436b320edf6cf79c10310be35b94bc37fd87143f517628190e0aba7ab"
	I1228 06:40:46.627869   57065 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b212d9a8c35037d4710584057be78fee6116d36808710270fe7bddde22f11439/kubepods/burstable/pod8576f111b1ff388080d65046518011ee/c7d3b56436b320edf6cf79c10310be35b94bc37fd87143f517628190e0aba7ab/freezer.state
	I1228 06:40:46.635482   57065 api_server.go:214] freezer state: "THAWED"
	I1228 06:40:46.635508   57065 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:40:46.645229   57065 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:40:46.645259   57065 status.go:463] ha-401098 apiserver status = Running (err=<nil>)
	I1228 06:40:46.645269   57065 status.go:176] ha-401098 status: &{Name:ha-401098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:40:46.645286   57065 status.go:174] checking status of ha-401098-m02 ...
	I1228 06:40:46.645605   57065 cli_runner.go:164] Run: docker container inspect ha-401098-m02 --format={{.State.Status}}
	I1228 06:40:46.664787   57065 status.go:371] ha-401098-m02 host status = "Stopped" (err=<nil>)
	I1228 06:40:46.664813   57065 status.go:384] host is not running, skipping remaining checks
	I1228 06:40:46.664820   57065 status.go:176] ha-401098-m02 status: &{Name:ha-401098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:40:46.664841   57065 status.go:174] checking status of ha-401098-m03 ...
	I1228 06:40:46.665158   57065 cli_runner.go:164] Run: docker container inspect ha-401098-m03 --format={{.State.Status}}
	I1228 06:40:46.682037   57065 status.go:371] ha-401098-m03 host status = "Running" (err=<nil>)
	I1228 06:40:46.682060   57065 host.go:66] Checking if "ha-401098-m03" exists ...
	I1228 06:40:46.682343   57065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401098-m03
	I1228 06:40:46.713516   57065 host.go:66] Checking if "ha-401098-m03" exists ...
	I1228 06:40:46.713841   57065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:40:46.713888   57065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401098-m03
	I1228 06:40:46.744594   57065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/ha-401098-m03/id_rsa Username:docker}
	I1228 06:40:46.850839   57065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:40:46.865694   57065 kubeconfig.go:125] found "ha-401098" server: "https://192.168.49.254:8443"
	I1228 06:40:46.865725   57065 api_server.go:166] Checking apiserver status ...
	I1228 06:40:46.865767   57065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:40:46.878939   57065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I1228 06:40:46.891504   57065 api_server.go:192] apiserver freezer: "8:freezer:/docker/10c4cf8ecc3dcef96d24b8e276c3eaf86c8c85821e0afe9397884eadc087c731/kubepods/burstable/pod41e0a7e2fa7de1abada9b2e376f42323/f5ccaa917dbab0f7d0d2c21f9dcec769b46b13f68494931137192312451f59d7"
	I1228 06:40:46.891579   57065 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/10c4cf8ecc3dcef96d24b8e276c3eaf86c8c85821e0afe9397884eadc087c731/kubepods/burstable/pod41e0a7e2fa7de1abada9b2e376f42323/f5ccaa917dbab0f7d0d2c21f9dcec769b46b13f68494931137192312451f59d7/freezer.state
	I1228 06:40:46.900167   57065 api_server.go:214] freezer state: "THAWED"
	I1228 06:40:46.900195   57065 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:40:46.908798   57065 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:40:46.908877   57065 status.go:463] ha-401098-m03 apiserver status = Running (err=<nil>)
	I1228 06:40:46.908901   57065 status.go:176] ha-401098-m03 status: &{Name:ha-401098-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:40:46.908924   57065 status.go:174] checking status of ha-401098-m04 ...
	I1228 06:40:46.909243   57065 cli_runner.go:164] Run: docker container inspect ha-401098-m04 --format={{.State.Status}}
	I1228 06:40:46.926468   57065 status.go:371] ha-401098-m04 host status = "Running" (err=<nil>)
	I1228 06:40:46.926493   57065 host.go:66] Checking if "ha-401098-m04" exists ...
	I1228 06:40:46.926849   57065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401098-m04
	I1228 06:40:46.944422   57065 host.go:66] Checking if "ha-401098-m04" exists ...
	I1228 06:40:46.944927   57065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:40:46.944975   57065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401098-m04
	I1228 06:40:46.963380   57065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32805 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/ha-401098-m04/id_rsa Username:docker}
	I1228 06:40:47.066355   57065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:40:47.080289   57065 status.go:176] ha-401098-m04 status: &{Name:ha-401098-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
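Note: the exit status 7 above is expected, not a failure. `minikube status` encodes per-component "not running" flags into its exit code, so it returns non-zero whenever any node in the profile is not fully up; with m02 stopped, 7 corresponds to host, kubelet, and apiserver all down on that node. That lets scripts gate on cluster health directly; a sketch, not part of the test:

    if ! out/minikube-linux-arm64 -p ha-401098 status >/dev/null 2>&1; then
      # At least one node is down; in this run m02 was stopped deliberately.
      out/minikube-linux-arm64 -p ha-401098 node start m02
    fi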

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 node start m02 --alsologtostderr -v 5: (12.031075259s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5: (1.327318646s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 stop --alsologtostderr -v 5: (37.432504575s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 start --wait true --alsologtostderr -v 5
E1228 06:41:40.814777    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:40.820088    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:40.830416    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:40.850774    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:40.891186    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:40.971633    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:41.132120    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:41.452742    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:42.094418    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:43.374925    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.935565    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:51.056410    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:42:01.296649    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:42:21.777000    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 start --wait true --alsologtostderr -v 5: (58.315228749s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.88s)
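Note: the assertion behind this step is that a full `stop` followed by `start --wait true` preserves the node roster; the test compares `node list` output from before and after the restart. As a standalone check:

    BEFORE=$(out/minikube-linux-arm64 -p ha-401098 node list)
    out/minikube-linux-arm64 -p ha-401098 stop
    out/minikube-linux-arm64 -p ha-401098 start --wait true
    AFTER=$(out/minikube-linux-arm64 -p ha-401098 node list)
    [ "$BEFORE" = "$AFTER" ] || echo "node list changed across restart" >&2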

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.4s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 node delete m03 --alsologtostderr -v 5: (10.459702971s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 stop --alsologtostderr -v 5
E1228 06:43:02.737233    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 stop --alsologtostderr -v 5: (36.061729836s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5: exit status 7 (118.601701ms)

-- stdout --
	ha-401098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401098-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1228 06:43:26.496120   71707 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:43:26.496318   71707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:43:26.496345   71707 out.go:374] Setting ErrFile to fd 2...
	I1228 06:43:26.496364   71707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:43:26.496693   71707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:43:26.496930   71707 out.go:368] Setting JSON to false
	I1228 06:43:26.496989   71707 mustload.go:66] Loading cluster: ha-401098
	I1228 06:43:26.497078   71707 notify.go:221] Checking for updates...
	I1228 06:43:26.497591   71707 config.go:182] Loaded profile config "ha-401098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:43:26.497628   71707 status.go:174] checking status of ha-401098 ...
	I1228 06:43:26.498154   71707 cli_runner.go:164] Run: docker container inspect ha-401098 --format={{.State.Status}}
	I1228 06:43:26.517374   71707 status.go:371] ha-401098 host status = "Stopped" (err=<nil>)
	I1228 06:43:26.517396   71707 status.go:384] host is not running, skipping remaining checks
	I1228 06:43:26.517404   71707 status.go:176] ha-401098 status: &{Name:ha-401098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:43:26.517443   71707 status.go:174] checking status of ha-401098-m02 ...
	I1228 06:43:26.517823   71707 cli_runner.go:164] Run: docker container inspect ha-401098-m02 --format={{.State.Status}}
	I1228 06:43:26.548486   71707 status.go:371] ha-401098-m02 host status = "Stopped" (err=<nil>)
	I1228 06:43:26.548518   71707 status.go:384] host is not running, skipping remaining checks
	I1228 06:43:26.548525   71707 status.go:176] ha-401098-m02 status: &{Name:ha-401098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:43:26.548544   71707 status.go:174] checking status of ha-401098-m04 ...
	I1228 06:43:26.548844   71707 cli_runner.go:164] Run: docker container inspect ha-401098-m04 --format={{.State.Status}}
	I1228 06:43:26.566730   71707 status.go:371] ha-401098-m04 host status = "Stopped" (err=<nil>)
	I1228 06:43:26.566754   71707 status.go:384] host is not running, skipping remaining checks
	I1228 06:43:26.566761   71707 status.go:176] ha-401098-m04 status: &{Name:ha-401098-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.34s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1228 06:44:24.657864    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.359205139s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.34s)
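Note: both restart tests close with the same readiness probe: a go-template over each node's conditions that prints one True/False per node, so a fully recovered cluster prints only True lines. Unwrapped from the harness quoting above:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'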

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (46.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 node add --control-plane --alsologtostderr -v 5
E1228 06:45:13.022004    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 node add --control-plane --alsologtostderr -v 5: (45.374410185s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-401098 status --alsologtostderr -v 5: (1.113777298s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080949423s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (44.25s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-805640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-805640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (44.243088916s)
--- PASS: TestJSONOutput/start/Command (44.25s)
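Note: with --output=json, `minikube start` emits one CloudEvents-style JSON object per line, and the Audit and step subtests below parse that stream instead of the human-readable output. A sketch of consuming it directly; the event type string and the currentstep/totalsteps/name fields under .data are as minikube emits them at the time of writing, so treat them as an assumption:

    out/minikube-linux-arm64 start -p json-output-805640 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'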

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-805640 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-805640 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-805640 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-805640 --output=json --user=testUser: (5.974494881s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-118022 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-118022 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.448895ms)

-- stdout --
	{"specversion":"1.0","id":"2658bbc3-171f-4418-b561-2b74c076bc5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-118022] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92575a38-93c3-4c17-828e-8c125c542c18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"81db9888-7561-4a14-a51b-bd2e780daf8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"226c23a6-a044-4e6a-a48d-8f25a92129c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig"}}
	{"specversion":"1.0","id":"ea108259-b1f3-4b2c-bc46-8ffa355f8825","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube"}}
	{"specversion":"1.0","id":"bd6c61fe-fb34-4511-a9bb-b0e606ba18c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"05dad376-972d-4adf-8ee7-7514a9e05ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d8f3330-0094-419f-80de-7fcd4757fa38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-118022" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-118022
--- PASS: TestErrorJSONOutput (0.23s)
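
The TestErrorJSONOutput block above also documents the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with the type field distinguishing step, info, and error events. Below is a minimal sketch of how such a stream could be consumed in Go, assuming only the field names visible in the captured stdout (the struct shape is inferred from this log, not taken from minikube's source):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields seen in the JSON lines above; the exact
// struct shape is an assumption for illustration.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not every line is a JSON event
		}
		// e.g. the DRV_UNSUPPORTED_OS error captured above surfaces here
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exitcode %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}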

TestKicCustomNetwork/create_custom_network (33.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-205980 --network=
E1228 06:46:40.820624    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-205980 --network=: (30.828886866s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-205980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-205980
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-205980: (2.240208568s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.10s)

TestKicCustomNetwork/use_default_bridge_network (28.94s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-363722 --network=bridge
E1228 06:47:08.500589    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-363722 --network=bridge: (26.84293464s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-363722" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-363722
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-363722: (2.073379061s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.94s)

TestKicExistingNetwork (31.91s)

=== RUN   TestKicExistingNetwork
I1228 06:47:22.813478    4195 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 06:47:22.829481    4195 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 06:47:22.829565    4195 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1228 06:47:22.829583    4195 cli_runner.go:164] Run: docker network inspect existing-network
W1228 06:47:22.845382    4195 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1228 06:47:22.845411    4195 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1228 06:47:22.845424    4195 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1228 06:47:22.845520    4195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 06:47:22.861879    4195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
I1228 06:47:22.862161    4195 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400035c580}
I1228 06:47:22.862184    4195 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1228 06:47:22.862234    4195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1228 06:47:22.933901    4195 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-923052 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-923052 --network=existing-network: (29.654980685s)
helpers_test.go:176: Cleaning up "existing-network-923052" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-923052
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-923052: (2.096817479s)
I1228 06:47:54.701608    4195 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.91s)
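
As the network_create.go lines above show, TestKicExistingNetwork first creates a labeled bridge network with plain docker commands, then points minikube start --network=existing-network at it. A standalone sketch of that sequence, with the subnet, gateway, MTU, and label values copied from this log (the -o --ip-masq and -o --icc options are omitted for brevity, and error handling is minimal):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pre-create the network outside minikube, as the test harness does.
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	if out, err := create.CombinedOutput(); err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	// minikube then attaches the new cluster to the pre-existing network.
	start := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "existing-network-923052", "--network=existing-network")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("minikube start failed: %v\n%s", err, out)
	}
}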

TestKicCustomSubnet (30.78s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-557378 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-557378 --subnet=192.168.60.0/24: (28.503756284s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-557378 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-557378" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-557378
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-557378: (2.243656633s)
--- PASS: TestKicCustomSubnet (30.78s)

TestKicStaticIP (30.85s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-527948 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-527948 --static-ip=192.168.200.200: (28.528266572s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-527948 ip
helpers_test.go:176: Cleaning up "static-ip-527948" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-527948
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-527948: (2.135200839s)
--- PASS: TestKicStaticIP (30.85s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (61.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-223642 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-223642 --driver=docker  --container-runtime=containerd: (25.513285911s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-226522 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-226522 --driver=docker  --container-runtime=containerd: (29.65342334s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-223642
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-226522
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-226522" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-226522
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-226522: (2.075713737s)
helpers_test.go:176: Cleaning up "first-223642" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-223642
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-223642: (2.327440673s)
--- PASS: TestMinikubeProfile (61.01s)

TestMountStart/serial/StartWithMountFirst (8.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-630078 --memory=3072 --mount-string /tmp/TestMountStartserial592576321/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-630078 --memory=3072 --mount-string /tmp/TestMountStartserial592576321/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.198778439s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.20s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-630078 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-632253 --memory=3072 --mount-string /tmp/TestMountStartserial592576321/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1228 06:50:13.021716    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-632253 --memory=3072 --mount-string /tmp/TestMountStartserial592576321/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.321397713s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.32s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-632253 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-630078 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-630078 --alsologtostderr -v=5: (1.673830414s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-632253 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-632253
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-632253: (1.28778936s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-632253
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-632253: (6.398936694s)
--- PASS: TestMountStart/serial/RestartStopped (7.40s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-632253 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (70.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-764486 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1228 06:51:36.078887    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-764486 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m10.402199283s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.95s)

TestMultiNode/serial/DeployApp2Nodes (5.32s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- rollout status deployment/busybox
E1228 06:51:40.815451    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-764486 -- rollout status deployment/busybox: (3.445113696s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-pgrfg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-sdsjg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-pgrfg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-sdsjg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-pgrfg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-sdsjg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.32s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-pgrfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-pgrfg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-sdsjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-764486 -- exec busybox-769dd8b7dd-sdsjg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (27.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-764486 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-764486 -v=5 --alsologtostderr: (27.213123227s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.90s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-764486 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp testdata/cp-test.txt multinode-764486:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2330255659/001/cp-test_multinode-764486.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486:/home/docker/cp-test.txt multinode-764486-m02:/home/docker/cp-test_multinode-764486_multinode-764486-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test_multinode-764486_multinode-764486-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486:/home/docker/cp-test.txt multinode-764486-m03:/home/docker/cp-test_multinode-764486_multinode-764486-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test_multinode-764486_multinode-764486-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp testdata/cp-test.txt multinode-764486-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2330255659/001/cp-test_multinode-764486-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m02:/home/docker/cp-test.txt multinode-764486:/home/docker/cp-test_multinode-764486-m02_multinode-764486.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test_multinode-764486-m02_multinode-764486.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m02:/home/docker/cp-test.txt multinode-764486-m03:/home/docker/cp-test_multinode-764486-m02_multinode-764486-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test_multinode-764486-m02_multinode-764486-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp testdata/cp-test.txt multinode-764486-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2330255659/001/cp-test_multinode-764486-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m03:/home/docker/cp-test.txt multinode-764486:/home/docker/cp-test_multinode-764486-m03_multinode-764486.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486 "sudo cat /home/docker/cp-test_multinode-764486-m03_multinode-764486.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 cp multinode-764486-m03:/home/docker/cp-test.txt multinode-764486-m02:/home/docker/cp-test_multinode-764486-m03_multinode-764486-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 ssh -n multinode-764486-m02 "sudo cat /home/docker/cp-test_multinode-764486-m03_multinode-764486-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.11s)

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-764486 node stop m03: (1.295318242s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-764486 status: exit status 7 (524.233086ms)

-- stdout --
	multinode-764486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-764486-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-764486-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr: exit status 7 (531.148367ms)

-- stdout --
	multinode-764486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-764486-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-764486-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 06:52:25.053353  124803 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:52:25.053538  124803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:52:25.053565  124803 out.go:374] Setting ErrFile to fd 2...
	I1228 06:52:25.053583  124803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:52:25.053999  124803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:52:25.054292  124803 out.go:368] Setting JSON to false
	I1228 06:52:25.054344  124803 mustload.go:66] Loading cluster: multinode-764486
	I1228 06:52:25.055076  124803 notify.go:221] Checking for updates...
	I1228 06:52:25.055128  124803 config.go:182] Loaded profile config "multinode-764486": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:52:25.055177  124803 status.go:174] checking status of multinode-764486 ...
	I1228 06:52:25.055925  124803 cli_runner.go:164] Run: docker container inspect multinode-764486 --format={{.State.Status}}
	I1228 06:52:25.074752  124803 status.go:371] multinode-764486 host status = "Running" (err=<nil>)
	I1228 06:52:25.074775  124803 host.go:66] Checking if "multinode-764486" exists ...
	I1228 06:52:25.075079  124803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764486
	I1228 06:52:25.098936  124803 host.go:66] Checking if "multinode-764486" exists ...
	I1228 06:52:25.099261  124803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:52:25.099318  124803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764486
	I1228 06:52:25.117385  124803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/multinode-764486/id_rsa Username:docker}
	I1228 06:52:25.213921  124803 ssh_runner.go:195] Run: systemctl --version
	I1228 06:52:25.220737  124803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:52:25.234101  124803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:52:25.300087  124803 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-28 06:52:25.290203478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:52:25.300709  124803 kubeconfig.go:125] found "multinode-764486" server: "https://192.168.67.2:8443"
	I1228 06:52:25.300754  124803 api_server.go:166] Checking apiserver status ...
	I1228 06:52:25.300813  124803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:52:25.315143  124803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	I1228 06:52:25.324677  124803 api_server.go:192] apiserver freezer: "8:freezer:/docker/c2ac70071bd768d2c081e65a579611b86efffeb58c9bb897696637c9c3c561ea/kubepods/burstable/pod032142cefc01916c7c7c18dd89f9176f/4544fb8486a22fb1a12e68c6d41950179e7e12aa5246573c42ddaa32652a5334"
	I1228 06:52:25.324780  124803 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c2ac70071bd768d2c081e65a579611b86efffeb58c9bb897696637c9c3c561ea/kubepods/burstable/pod032142cefc01916c7c7c18dd89f9176f/4544fb8486a22fb1a12e68c6d41950179e7e12aa5246573c42ddaa32652a5334/freezer.state
	I1228 06:52:25.332851  124803 api_server.go:214] freezer state: "THAWED"
	I1228 06:52:25.332882  124803 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1228 06:52:25.342586  124803 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1228 06:52:25.342617  124803 status.go:463] multinode-764486 apiserver status = Running (err=<nil>)
	I1228 06:52:25.342628  124803 status.go:176] multinode-764486 status: &{Name:multinode-764486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:52:25.342669  124803 status.go:174] checking status of multinode-764486-m02 ...
	I1228 06:52:25.343017  124803 cli_runner.go:164] Run: docker container inspect multinode-764486-m02 --format={{.State.Status}}
	I1228 06:52:25.360839  124803 status.go:371] multinode-764486-m02 host status = "Running" (err=<nil>)
	I1228 06:52:25.360864  124803 host.go:66] Checking if "multinode-764486-m02" exists ...
	I1228 06:52:25.361175  124803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764486-m02
	I1228 06:52:25.381473  124803 host.go:66] Checking if "multinode-764486-m02" exists ...
	I1228 06:52:25.381801  124803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:52:25.381850  124803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764486-m02
	I1228 06:52:25.400914  124803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/multinode-764486-m02/id_rsa Username:docker}
	I1228 06:52:25.497818  124803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:52:25.510552  124803 status.go:176] multinode-764486-m02 status: &{Name:multinode-764486-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:52:25.510588  124803 status.go:174] checking status of multinode-764486-m03 ...
	I1228 06:52:25.510902  124803 cli_runner.go:164] Run: docker container inspect multinode-764486-m03 --format={{.State.Status}}
	I1228 06:52:25.528798  124803 status.go:371] multinode-764486-m03 host status = "Stopped" (err=<nil>)
	I1228 06:52:25.528823  124803 status.go:384] host is not running, skipping remaining checks
	I1228 06:52:25.528831  124803 status.go:176] multinode-764486-m03 status: &{Name:multinode-764486-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
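
The stderr trace above walks through the status probe step by step: inspect the container state, pgrep the kube-apiserver process, confirm its freezer cgroup is THAWED, and finally GET /healthz on the control-plane endpoint. A minimal sketch of that last step; unlike the real client it skips certificate verification, and the endpoint address is simply the one logged here:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The real probe authenticates with the cluster's client certs; this
	// sketch skips verification just to illustrate the health check.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The trace above logs the equivalent result as "returned 200: ok".
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}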

TestMultiNode/serial/StartAfterStop (7.75s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-764486 node start m03 -v=5 --alsologtostderr: (6.931728341s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.75s)

TestMultiNode/serial/RestartKeepsNodes (78.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-764486
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-764486
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-764486: (25.079586015s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-764486 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-764486 --wait=true -v=5 --alsologtostderr: (53.722544884s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-764486
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.92s)

TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-764486 node delete m03: (4.905866527s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-764486 stop: (23.842781861s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-764486 status: exit status 7 (95.115842ms)

-- stdout --
	multinode-764486
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-764486-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr: exit status 7 (82.584067ms)

-- stdout --
	multinode-764486
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-764486-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1228 06:54:21.738724  133578 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:54:21.738914  133578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:54:21.738941  133578 out.go:374] Setting ErrFile to fd 2...
	I1228 06:54:21.738961  133578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:54:21.739369  133578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:54:21.739653  133578 out.go:368] Setting JSON to false
	I1228 06:54:21.739702  133578 mustload.go:66] Loading cluster: multinode-764486
	I1228 06:54:21.740385  133578 config.go:182] Loaded profile config "multinode-764486": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:54:21.740425  133578 status.go:174] checking status of multinode-764486 ...
	I1228 06:54:21.741207  133578 cli_runner.go:164] Run: docker container inspect multinode-764486 --format={{.State.Status}}
	I1228 06:54:21.742170  133578 notify.go:221] Checking for updates...
	I1228 06:54:21.759099  133578 status.go:371] multinode-764486 host status = "Stopped" (err=<nil>)
	I1228 06:54:21.759120  133578 status.go:384] host is not running, skipping remaining checks
	I1228 06:54:21.759127  133578 status.go:176] multinode-764486 status: &{Name:multinode-764486 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:54:21.759147  133578 status.go:174] checking status of multinode-764486-m02 ...
	I1228 06:54:21.759438  133578 cli_runner.go:164] Run: docker container inspect multinode-764486-m02 --format={{.State.Status}}
	I1228 06:54:21.775404  133578 status.go:371] multinode-764486-m02 host status = "Stopped" (err=<nil>)
	I1228 06:54:21.775424  133578 status.go:384] host is not running, skipping remaining checks
	I1228 06:54:21.775431  133578 status.go:176] multinode-764486-m02 status: &{Name:multinode-764486-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (50.09s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-764486 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-764486 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.427833652s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-764486 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.09s)

TestMultiNode/serial/ValidateNameConflict (30.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-764486
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-764486-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-764486-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.126798ms)

-- stdout --
	* [multinode-764486-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-764486-m02' is duplicated with machine name 'multinode-764486-m02' in profile 'multinode-764486'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-764486-m03 --driver=docker  --container-runtime=containerd
E1228 06:55:13.021538    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-764486-m03 --driver=docker  --container-runtime=containerd: (28.221573251s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-764486
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-764486: exit status 80 (436.669845ms)

-- stdout --
	* Adding node m03 to cluster multinode-764486 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-764486-m03 already exists in multinode-764486-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-764486-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-764486-m03: (2.013147424s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.82s)
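The MK_USAGE and GUEST_NODE_ADD failures above both come down to one rule: a profile name may not collide with any machine name owned by an existing profile (node m02 of "multinode-764486" is the machine "multinode-764486-m02"). A hedged sketch of that rule; the nodeNames helper and node count are illustrative, not minikube's real profile API:

package main

import "fmt"

// nodeNames lists the machine names a profile owns; hypothetical helper.
func nodeNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func main() {
	existing := nodeNames("multinode-764486", 2) // owns multinode-764486-m02
	candidate := "multinode-764486-m02"          // the name the test tries to reuse
	for _, n := range existing {
		if n == candidate {
			fmt.Printf("X Exiting due to MK_USAGE: profile name %q duplicates a machine name in profile %q\n",
				candidate, "multinode-764486")
			return
		}
	}
	fmt.Println("profile name is unique")
}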

TestScheduledStopUnix (105.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-018474 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-018474 --memory=3072 --driver=docker  --container-runtime=containerd: (28.34268124s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018474 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:56:15.245994  143075 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:15.246166  143075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:15.246176  143075 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:15.246181  143075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:15.246426  143075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:56:15.246721  143075 out.go:368] Setting JSON to false
	I1228 06:56:15.246839  143075 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:15.247244  143075 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:56:15.247334  143075 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/scheduled-stop-018474/config.json ...
	I1228 06:56:15.247527  143075 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:15.247656  143075 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-018474 -n scheduled-stop-018474
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:56:15.681212  143167 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:15.681368  143167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:15.681377  143167 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:15.681384  143167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:15.681681  143167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:56:15.681971  143167 out.go:368] Setting JSON to false
	I1228 06:56:15.682165  143167 daemonize_unix.go:73] killing process 143091 as it is an old scheduled stop
	I1228 06:56:15.684589  143167 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:15.685049  143167 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:56:15.685178  143167 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/scheduled-stop-018474/config.json ...
	I1228 06:56:15.685578  143167 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:15.685768  143167 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1228 06:56:15.691170    4195 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/scheduled-stop-018474/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018474 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1228 06:56:40.820960    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018474 -n scheduled-stop-018474
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-018474
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-018474 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1228 06:56:41.562902  143864 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:41.563046  143864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:41.563074  143864 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:41.563086  143864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:41.563964  143864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 06:56:41.564839  143864 out.go:368] Setting JSON to false
	I1228 06:56:41.566201  143864 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:41.566877  143864 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 06:56:41.567005  143864 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/scheduled-stop-018474/config.json ...
	I1228 06:56:41.567252  143864 mustload.go:66] Loading cluster: scheduled-stop-018474
	I1228 06:56:41.567535  143864 config.go:182] Loaded profile config "scheduled-stop-018474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-018474
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-018474: exit status 7 (68.067874ms)

-- stdout --
	scheduled-stop-018474
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018474 -n scheduled-stop-018474
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-018474 -n scheduled-stop-018474: exit status 7 (65.084338ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-018474" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-018474
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-018474: (5.290023594s)
--- PASS: TestScheduledStopUnix (105.15s)
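The schedule/cancel sequence above ("killing process 143091 as it is an old scheduled stop", the retried read of the profile's pid file) suggests the mechanism: a scheduled stop is a background process whose pid is written to a file, and a later --schedule or --cancel-scheduled run kills whatever that file points at. A loose sketch under those assumptions; the pid-file path and the stop itself are stand-ins, not minikube's actual daemonize code:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
	"time"
)

const pidFile = "/tmp/scheduled-stop-018474.pid" // hypothetical location

// cancelExisting kills a previously scheduled stop, if any.
func cancelExisting() {
	b, err := os.ReadFile(pidFile)
	if err != nil {
		return // nothing scheduled
	}
	if pid, err := strconv.Atoi(strings.TrimSpace(string(b))); err == nil {
		syscall.Kill(pid, syscall.SIGTERM) // superseded schedule
	}
	os.Remove(pidFile)
}

func main() {
	cancelExisting()
	os.WriteFile(pidFile, []byte(strconv.Itoa(os.Getpid())), 0o644)
	time.AfterFunc(15*time.Second, func() { // --schedule 15s
		fmt.Println("stopping cluster now") // stand-in for the real stop
		os.Remove(pidFile)
		os.Exit(0)
	})
	select {} // block until the timer fires or we are killed
}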

TestInsufficientStorage (12.72s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-499335 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-499335 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.222936835s)

-- stdout --
	{"specversion":"1.0","id":"9121111d-7bb1-41b3-93c3-a5125477cbcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-499335] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"500649a4-dc82-4961-9bf9-3252df0e0665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"4f181ed2-d7c0-4e88-bb02-61421c9d8a90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8d26f61-1b68-4c47-842c-b5bff5d987c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig"}}
	{"specversion":"1.0","id":"1891d545-41a3-48b8-b5ce-c91e85a9fb97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube"}}
	{"specversion":"1.0","id":"7dec2614-b350-4607-9fba-8e33701e5764","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"efc4a6fa-3e38-4f98-83b4-8cd113b80211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"08505aa7-847a-4b83-9301-0cbff4ed5be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"51a9ed44-8f94-43c7-af91-0f0c7f39e300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4eec34d1-914f-44f9-be64-6950f350849f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1354b1c-bfe4-4a09-90d0-cccbf2eac336","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aa5ca8d9-6de5-4112-921f-ef982ecea260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-499335\" primary control-plane node in \"insufficient-storage-499335\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"673185d2-94c9-4dec-8c29-47af50cba074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766884053-22351 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a95e2c7-97e3-4a68-afea-eda5a5165e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8661dba4-126e-4eea-8c54-38cc400bd36e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-499335 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-499335 --output=json --layout=cluster: exit status 7 (292.614481ms)

-- stdout --
	{"Name":"insufficient-storage-499335","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-499335","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:57:42.502940  145692 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-499335" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-499335 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-499335 --output=json --layout=cluster: exit status 7 (286.068621ms)

-- stdout --
	{"Name":"insufficient-storage-499335","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-499335","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:57:42.791230  145756 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-499335" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
	E1228 06:57:42.801363  145756 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/insufficient-storage-499335/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-499335" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-499335
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-499335: (1.913429297s)
--- PASS: TestInsufficientStorage (12.72s)
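Exit code 26 above is a preflight storage gate, here tripped artificially via MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19. A sketch of the underlying percent-used check on Linux; the 90% threshold is illustrative, not minikube's exact cutoff:

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/var", &fs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	total := fs.Blocks * uint64(fs.Bsize)
	avail := fs.Bavail * uint64(fs.Bsize)
	if total == 0 {
		fmt.Fprintln(os.Stderr, "statfs reported zero capacity")
		os.Exit(1)
	}
	used := 100 * (total - avail) / total
	fmt.Printf("/var is at %d%% of capacity\n", used)
	if used >= 90 { // illustrative threshold
		fmt.Println("X Exiting due to RSRC_DOCKER_STORAGE (pass --force to skip this check)")
		os.Exit(26)
	}
}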

TestRunningBinaryUpgrade (55.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3699257422 start -p running-upgrade-539711 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3699257422 start -p running-upgrade-539711 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (40.862111274s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-539711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1228 07:05:13.021146    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-539711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (11.293593496s)
helpers_test.go:176: Cleaning up "running-upgrade-539711" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-539711
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-539711: (2.152552552s)
--- PASS: TestRunningBinaryUpgrade (55.14s)

TestKubernetesUpgrade (333.94s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.107812464s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-309476 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-309476 --alsologtostderr: (1.334167888s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-309476 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-309476 status --format={{.Host}}: exit status 7 (67.847142ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.875369458s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-309476 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (101.933918ms)

-- stdout --
	* [kubernetes-upgrade-309476] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-309476
	    minikube start -p kubernetes-upgrade-309476 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3094762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-309476 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-309476 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (11.377901968s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-309476" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-309476
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-309476: (1.984811465s)
--- PASS: TestKubernetesUpgrade (333.94s)
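The K8S_DOWNGRADE_UNSUPPORTED refusal (exit 106) is a pure version comparison: the requested v1.28.0 sorts below the cluster's v1.35.0, so start bails out before touching the cluster. A sketch of that comparison using golang.org/x/mod/semver (an assumption for illustration; minikube's actual check lives in its own version helpers):

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.35.0", "v1.28.0" // versions from this test run
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s\n",
			current, requested)
		os.Exit(106)
	}
	fmt.Println("same version or upgrade: proceeding")
}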

TestMissingContainerUpgrade (124.1s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.281826365 start -p missing-upgrade-934782 --memory=3072 --driver=docker  --container-runtime=containerd
E1228 06:58:03.861541    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.281826365 start -p missing-upgrade-934782 --memory=3072 --driver=docker  --container-runtime=containerd: (59.881682871s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-934782
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-934782
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-934782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-934782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.909702928s)
helpers_test.go:176: Cleaning up "missing-upgrade-934782" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-934782
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-934782: (2.461487022s)
--- PASS: TestMissingContainerUpgrade (124.10s)

TestPause/serial/Start (56.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-133308 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-133308 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.271579186s)
--- PASS: TestPause/serial/Start (56.27s)

TestPause/serial/SecondStartNoReconfiguration (7.8s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-133308 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-133308 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.786551557s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.80s)

TestPause/serial/Pause (0.54s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-133308 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestStoppedBinaryUpgrade/Setup (0.86s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)

TestStoppedBinaryUpgrade/Upgrade (311.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2572465182 start -p stopped-upgrade-646225 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1228 07:00:13.021555    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2572465182 start -p stopped-upgrade-646225 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.839363239s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2572465182 -p stopped-upgrade-646225 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2572465182 -p stopped-upgrade-646225 stop: (1.263931134s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-646225 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1228 07:01:40.816490    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-646225 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.544443513s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (311.65s)
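The upgrade scenario above is scripted entirely through two binaries on one profile: the v1.35.0 release creates and stops the cluster, then the freshly built binary must adopt and upgrade it. A driver sketch of those three steps; paths and the profile name are taken from the log, and the run helper is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a binary and aborts on the first failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.35.0.2572465182" // released binary
	newBin := "out/minikube-linux-arm64"         // binary under test
	profile := "stopped-upgrade-646225"
	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", profile, "stop")
	// The new binary must pick up the stopped cluster's state and upgrade it.
	run(newBin, "start", "-p", profile, "--memory=3072", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=containerd")
}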

TestStoppedBinaryUpgrade/MinikubeLogs (2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-646225
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-646225: (2.002336791s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.00s)

TestPreload/Start-NoPreload-PullImage (67.12s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-118685 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-118685 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (1m0.328375979s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-118685 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-118685
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-118685: (5.946822717s)
--- PASS: TestPreload/Start-NoPreload-PullImage (67.12s)

TestPreload/Restart-With-Preload-Check-User-Image (48.9s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-118685 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1228 07:06:40.815250    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-118685 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (48.655951336s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-118685 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (48.90s)
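The image list at preload_test.go:76 is what proves the point of the two preload tests: an image pulled while preloads were disabled must still be present after a stop and a preload-backed restart. A standalone sketch of that assertion, reusing the profile and image names from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "test-preload-118685", "image", "list").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image list failed:", err)
		os.Exit(1)
	}
	want := "ghcr.io/medyagh/image-mirrors/busybox"
	if !strings.Contains(string(out), want) {
		fmt.Println("user image missing after restart")
		os.Exit(1)
	}
	fmt.Println("user image survived the restart")
}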

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (92.3014ms)

-- stdout --
	* [NoKubernetes-875097] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
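Exit 14 here is flag validation only: --no-kubernetes and --kubernetes-version contradict each other, so the start is rejected in well under 100ms, before any driver work. A sketch of the guard using the stdlib flag package (minikube itself wires this through cobra):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE, matching the log above
	}
	fmt.Println("flags are consistent")
}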

TestNoKubernetes/serial/StartWithK8s (27.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-875097 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-875097 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.46558158s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-875097 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (3.636375869s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-875097 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-875097 status -o json: exit status 2 (349.985975ms)

-- stdout --
	{"Name":"NoKubernetes-875097","Host":"Running","Kubelet":"Stopped","APIServer":"Running","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-875097
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-875097: (2.002791657s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.99s)

TestNoKubernetes/serial/Start (7.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-875097 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.614260638s)
--- PASS: TestNoKubernetes/serial/Start (7.61s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-875097 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-875097 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.286501ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
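Note the inverted success condition above: systemctl is-active exits 0 only for an active unit, so the non-zero exit (ssh reports status 3, systemd's code for inactive) is exactly what the test wants. A local sketch of the same check, eliding the ssh hop and assuming a systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("FAIL: kubelet is active but should be stopped")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 3 means "inactive", which is the desired state here.
		fmt.Printf("OK: kubelet not running (is-active exited %d)\n", exitErr.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}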

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-875097
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-875097: (1.302817227s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (6.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-875097 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-875097 --driver=docker  --container-runtime=containerd: (6.375708049s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-875097 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-875097 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.964618ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestNetworkPlugins/group/false (3.93s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-742569 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-742569 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (596.858383ms)

-- stdout --
	* [false-742569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1228 07:08:00.255512  195701 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:08:00.256070  195701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:08:00.256136  195701 out.go:374] Setting ErrFile to fd 2...
	I1228 07:08:00.256158  195701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:08:00.256557  195701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
	I1228 07:08:00.257126  195701 out.go:368] Setting JSON to false
	I1228 07:08:00.258105  195701 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3030,"bootTime":1766902650,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1228 07:08:00.258224  195701 start.go:143] virtualization:  
	I1228 07:08:00.266701  195701 out.go:179] * [false-742569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:08:00.288569  195701 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:08:00.299152  195701 notify.go:221] Checking for updates...
	I1228 07:08:00.315635  195701 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:08:00.319643  195701 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
	I1228 07:08:00.335067  195701 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
	I1228 07:08:00.346650  195701 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:08:00.357203  195701 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:08:00.370494  195701 config.go:182] Loaded profile config "force-systemd-env-782848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1228 07:08:00.370627  195701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:08:00.424551  195701 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:08:00.425116  195701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:08:00.564440  195701 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:08:00.550642262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:08:00.564624  195701 docker.go:319] overlay module found
	I1228 07:08:00.567846  195701 out.go:179] * Using the docker driver based on user configuration
	I1228 07:08:00.570868  195701 start.go:309] selected driver: docker
	I1228 07:08:00.570892  195701 start.go:928] validating driver "docker" against <nil>
	I1228 07:08:00.570906  195701 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:08:00.574587  195701 out.go:203] 
	W1228 07:08:00.577471  195701 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1228 07:08:00.580335  195701 out.go:203] 

** /stderr **
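The fast exit 14 above comes from start validation: every container runtime except Docker needs a CNI to wire pod networking, so --cni=false with containerd is rejected before a node is created. A simplified sketch of that rule (minikube's real validation covers more CNI values than this):

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects configurations that leave a non-Docker runtime without pod networking.
func validateCNI(runtime, cni string) error {
	if runtime != "docker" && cni == "false" {
		return fmt.Errorf("X Exiting due to MK_USAGE: The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(14)
	}
	fmt.Println("CNI configuration accepted")
}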
net_test.go:88: 
----------------------- debugLogs start: false-742569 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-742569

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-742569

>>> host: /etc/nsswitch.conf:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

>>> host: /etc/hosts:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

>>> host: /etc/resolv.conf:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-742569

>>> host: crictl pods:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

>>> host: crictl containers:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

>>> k8s: describe netcat deployment:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-742569" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-742569

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742569"

                                                
                                                
----------------------- debugLogs end: false-742569 [took: 3.183510546s] --------------------------------
helpers_test.go:176: Cleaning up "false-742569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-742569
--- PASS: TestNetworkPlugins/group/false (3.93s)
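
Note: every probe in the debugLogs dump above fails with one of two messages: kubectl-driven probes report a missing context, and host-level probes report a missing profile, while the captured kubectl config is empty (clusters, contexts, and users all null). Taken together this says the "false-742569" profile and its kubeconfig context were already gone (or were never created) when the log collector ran; it does not indicate a regression in the probes themselves. Assuming the same profile name, that state can be confirmed locally with:

    kubectl config get-contexts                # false-742569 should be absent
    out/minikube-linux-arm64 profile list      # the profile should not be listed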

TestStartStop/group/old-k8s-version/serial/FirstStart (59.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1228 07:14:43.862515    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:15:13.022165    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (59.576343579s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.58s)
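
Note: the E1228 cert_rotation.go:172 lines interleaved above appear to come from client-go's certificate-rotation watcher inside the long-running test process; they point at client.crt files for profiles (functional-243289, addons-092445) that earlier tests already deleted, so they are presumably stale-kubeconfig noise rather than a problem with this test, which passed. A quick (hypothetical) way to confirm those profiles are gone:

    ls /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/   # deleted profiles should no longer appear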

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-251758 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [14607771-b19a-4ba9-a762-db4a4dbbf8a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [14607771-b19a-4ba9-a762-db4a4dbbf8a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004052088s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-251758 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)
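
For reference, the DeployApp step is essentially the two commands quoted in the log: create a busybox pod from testdata/busybox.yaml, wait (up to 8m0s) for pods labeled integration-test=busybox to become healthy, then exec a trivial command in the pod. It exercises both scheduling and the exec path through the containerd runtime:

    kubectl --context old-k8s-version-251758 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-251758 exec busybox -- /bin/sh -c "ulimit -n"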

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028333102s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-251758 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-251758 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-251758 --alsologtostderr -v=3: (11.988271532s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758 -n old-k8s-version-251758
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758 -n old-k8s-version-251758: exit status 7 (65.483456ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
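
Here exit status 7 from "minikube status" accompanies a host state of "Stopped", and the harness explicitly treats it as acceptable ("may be ok"): the point of EnableAddonAfterStop is that addon configuration still succeeds while the cluster is down. The equivalent manual sequence, using only commands from the log:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758   # prints "Stopped", exits 7
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4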

TestStartStop/group/old-k8s-version/serial/SecondStart (48.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.757533804s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-251758 -n old-k8s-version-251758
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.12s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p26th" [072d95c9-db7e-4046-a9fd-bb6c6286259a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003140059s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p26th" [072d95c9-db7e-4046-a9fd-bb6c6286259a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003157866s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-251758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-251758 image list --format=json
E1228 07:16:40.815116    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/FirstStart (51.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.828372109s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.83s)
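
The no-preload variant passes --preload=false, so minikube skips the preloaded image tarball and the Kubernetes images have to be fetched individually through the container runtime; the group exists to verify that a start still succeeds on that slower path (51.83s here). Verbatim from the log:

    out/minikube-linux-arm64 start -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0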

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-863373 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a7592c16-e8e8-49ef-b151-55394d0b6cfe] Pending
helpers_test.go:353: "busybox" [a7592c16-e8e8-49ef-b151-55394d0b6cfe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a7592c16-e8e8-49ef-b151-55394d0b6cfe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003181858s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-863373 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054194716s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-863373 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-863373 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-863373 --alsologtostderr -v=3: (12.132604599s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863373 -n no-preload-863373
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863373 -n no-preload-863373: exit status 7 (86.511675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (50.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (50.503181457s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863373 -n no-preload-863373
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.90s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5gjbr" [a8a23b05-1d1d-4df5-8589-cebee9df32f5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023914651s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5gjbr" [a8a23b05-1d1d-4df5-8589-cebee9df32f5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.013560489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-863373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-863373 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/FirstStart (44.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (44.95057853s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.95s)

TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468470 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [92792392-3d00-4621-8e03-8b78d9dd9fb9] Pending
helpers_test.go:353: "busybox" [92792392-3d00-4621-8e03-8b78d9dd9fb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [92792392-3d00-4621-8e03-8b78d9dd9fb9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004831352s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.250855143s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-468470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-468470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/embed-certs/serial/Stop (12.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-468470 --alsologtostderr -v=3
E1228 07:20:13.021495    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:17.948115    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:17.953262    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:17.963633    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:17.983969    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:18.024309    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:18.104559    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:18.265259    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:18.586096    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:19.226954    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:20.508027    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:23.068967    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-468470 --alsologtostderr -v=3: (12.281735552s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.28s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-468470 -n embed-certs-468470
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-468470 -n embed-certs-468470: exit status 7 (112.137874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-468470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1228 07:20:28.189731    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:20:38.430361    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (50.783798076s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-468470 -n embed-certs-468470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0c5797f8-1aa0-4c3a-8ff6-476efa909ab3] Pending
helpers_test.go:353: "busybox" [0c5797f8-1aa0-4c3a-8ff6-476efa909ab3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0c5797f8-1aa0-4c3a-8ff6-476efa909ab3] Running
E1228 07:20:58.911043    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003829586s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-450028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-450028 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-450028 --alsologtostderr -v=3: (12.204404745s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lg5dt" [c56568a5-400b-4e48-85ea-6eee981ec7bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00441635s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028: exit status 7 (86.036324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-450028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-450028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (48.284984543s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-450028 -n default-k8s-diff-port-450028
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.72s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.2s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lg5dt" [c56568a5-400b-4e48-85ea-6eee981ec7bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003567507s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-468470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.20s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-468470 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/FirstStart (31.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (31.675508854s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.68s)
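
newest-cni starts with --network-plugin=cni but, judging by the warning later in this group, installs no CNI plugin, so --wait is narrowed to apiserver,system_pods,default_sa (components that become ready without pod networking). The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag is minikube's generic mechanism for passing a parameter through to kubeadm. Verbatim from the log:

    out/minikube-linux-arm64 start -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0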

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-99ssq" [9974ed5a-08fa-4623-831a-ce7f833322a7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003445548s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-99ssq" [9974ed5a-08fa-4623-831a-ce7f833322a7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003778799s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-450028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.20s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-205774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.240575377s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)
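
That warning also explains why DeployApp for this group ran in 0.00s earlier: with no CNI installed, ordinary pods cannot schedule, so the deploy step is skipped rather than failed. A hypothetical spot check against this profile would be:

    kubectl --context newest-cni-205774 get nodes   # the node may report NotReady until a CNI is applied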

TestStartStop/group/newest-cni/serial/Stop (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-205774 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-205774 --alsologtostderr -v=3: (1.432456661s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774: exit status 7 (67.291426ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
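
The Stop/EnableAddonAfterStop pair above amounts to the following manual sequence; exit status 7 from "status" on a stopped host is expected (the harness itself logs it as "may be ok"). A sketch using the commands from the log:

  out/minikube-linux-arm64 stop -p newest-cni-205774 --alsologtostderr -v=3
  # exits with status 7 and prints "Stopped" while the host is down
  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774
  # addons can still be toggled while the cluster is stopped
  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-205774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4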

TestStartStop/group/newest-cni/serial/SecondStart (18.79s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-205774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (18.452834599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-205774 -n newest-cni-205774
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.79s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-450028 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
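
The image audit above is a single CLI call; images outside the expected Kubernetes set (here the kindnetd and busybox test images) are reported but, as the PASS shows, do not fail the test. To reproduce:

  # JSON listing of every image loaded in the profile's container runtime
  out/minikube-linux-arm64 -p default-k8s-diff-port-450028 image list --format=json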

TestPreload/PreloadSrc/gcs (5.61s)
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-209495 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-209495 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (5.388322231s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-209495" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-209495
--- PASS: TestPreload/PreloadSrc/gcs (5.61s)

TestPreload/PreloadSrc/github (6.68s)
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-675713 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-675713 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (6.447532941s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-675713" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-675713
--- PASS: TestPreload/PreloadSrc/github (6.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-205774 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestPreload/PreloadSrc/gcs-cached (0.68s)
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-822090 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-822090" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-822090
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.68s)
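
The three PreloadSrc subtests differ only in --preload-source and in whether the tarball is already cached: the gcs-cached run finishes in under a second because the v1.34.0-rc.2 preload was presumably already fetched by the github run. A condensed sketch of the sequence, using the commands from the log:

  # fetch the preload tarball from GCS, then from GitHub releases
  out/minikube-linux-arm64 start -p test-preload-dl-gcs-209495 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 start -p test-preload-dl-github-675713 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker --container-runtime=containerd
  # request v1.34.0-rc.2 from GCS again: satisfied from the local cache, no download
  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-822090 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker --container-runtime=containerd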

TestNetworkPlugins/group/auto/Start (48.88s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (48.881347732s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.88s)

TestNetworkPlugins/group/kindnet/Start (51.54s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1228 07:22:47.631428    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:22:52.752046    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:23:01.792532    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:23:02.992720    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:23:23.473782    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.537573012s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.54s)
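
The E1228 cert_rotation lines above appear to be client-side noise rather than test failures: the shared kubeconfig still holds contexts for profiles deleted earlier in the run (no-preload-863373, old-k8s-version-251758), so certificate reloads fail with "no such file or directory" while the kindnet start proceeds normally. When reproducing locally, the stale contexts can be pruned; a sketch, with context names taken from the errors above:

  kubectl config get-contexts
  kubectl config delete-context no-preload-863373
  kubectl config delete-context old-k8s-version-251758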

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-742569 "pgrep -a kubelet"
I1228 07:23:30.097806    4195 config.go:182] Loaded profile config "auto-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
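
Each KubeletFlags check is a one-liner: ssh into the node and print the running kubelet with its full command line so the flags can be compared against the profile config. For the auto profile:

  # prints the kubelet PID and the flags it was started with
  out/minikube-linux-arm64 ssh -p auto-742569 "pgrep -a kubelet"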

TestNetworkPlugins/group/auto/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-9q9p2" [90b85082-ca62-4a9f-a82d-b32d833079ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-9q9p2" [90b85082-ca62-4a9f-a82d-b32d833079ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00379813s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-chmtj" [a10c1baf-8f29-4fdd-b2a1-19d97bdcf255] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003806741s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
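
The DNS/Localhost/HairPin trio above repeats for every CNI in this report. Reproduced by hand for the auto profile it looks like the sketch below (netcat-deployment.yaml is relative to the minikube test directory; the hairpin step has the pod dial its own service name, which only succeeds when hairpin NAT works):

  # deploy the netcat test pod and service
  kubectl --context auto-742569 replace --force -f testdata/netcat-deployment.yaml
  # DNS: resolve the in-cluster API service
  kubectl --context auto-742569 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: dial the pod's own loopback
  kubectl --context auto-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: dial back in through the pod's own service
  kubectl --context auto-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"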

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-742569 "pgrep -a kubelet"
I1228 07:23:45.284120    4195 config.go:182] Loaded profile config "kindnet-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-g8986" [b3c784e5-2076-4bac-a023-03b5a75ebb31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-g8986" [b3c784e5-2076-4bac-a023-03b5a75ebb31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004408951s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (63.08s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1228 07:24:04.437480    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/no-preload-863373/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.077272546s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.08s)

TestNetworkPlugins/group/custom-flannel/Start (55.47s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1228 07:24:56.080841    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.472400944s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.47s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-rt2lp" [3973cda1-3771-4ad7-b0ef-303f6046b072] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-rt2lp" [3973cda1-3771-4ad7-b0ef-303f6046b072] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003460837s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-742569 "pgrep -a kubelet"
I1228 07:25:12.589553    4195 config.go:182] Loaded profile config "calico-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-f5lb4" [78436fe1-9eff-4162-a9f4-74230d429d56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1228 07:25:13.022061    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/addons-092445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-f5lb4" [78436fe1-9eff-4162-a9f4-74230d429d56] Running
E1228 07:25:17.946673    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/old-k8s-version-251758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003572262s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-742569 "pgrep -a kubelet"
I1228 07:25:15.598682    4195 config.go:182] Loaded profile config "custom-flannel-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mm4ds" [f7a29e0e-966a-45fb-bb50-2acfff204c1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mm4ds" [f7a29e0e-966a-45fb-bb50-2acfff204c1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003988792s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)

TestNetworkPlugins/group/calico/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (70.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.320389027s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.32s)

TestNetworkPlugins/group/flannel/Start (55.67s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1228 07:25:53.609633    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.614897    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.625166    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.645438    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.685699    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.766145    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:53.927040    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:54.248171    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:54.888422    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:56.168584    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:58.729114    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:26:03.849320    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:26:14.090204    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:26:34.571021    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/default-k8s-diff-port-450028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:26:40.815177    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/functional-243289/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.666278296s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.67s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-8f5k5" [b27bb0b0-8ec5-49e2-98c7-9315d9e7220b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003074812s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-742569 "pgrep -a kubelet"
I1228 07:26:54.812298    4195 config.go:182] Loaded profile config "flannel-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-x5hx9" [9ee214f5-84ce-49cc-84e7-e053597b4b21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-x5hx9" [9ee214f5-84ce-49cc-84e7-e053597b4b21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002618846s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-742569 "pgrep -a kubelet"
I1228 07:26:59.103806    4195 config.go:182] Loaded profile config "enable-default-cni-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-2pfc6" [83f9c3b0-1334-4fb0-ac0b-55e7986d9acf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-2pfc6" [83f9c3b0-1334-4fb0-ac0b-55e7986d9acf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004413295s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (75.18s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-742569 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.183624345s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-742569 "pgrep -a kubelet"
I1228 07:28:45.464815    4195 config.go:182] Loaded profile config "bridge-742569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (8.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-742569 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-9d9d5" [60e2372f-53aa-44c1-acb9-b33a7e68f6f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-9d9d5" [60e2372f-53aa-44c1-acb9-b33a7e68f6f5] Running
E1228 07:28:49.149698    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/kindnet-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:28:50.874878    4195 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/auto-742569/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003806759s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-742569 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-742569 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-457104 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-457104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-457104
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
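Note: skaffold drives its builds through "minikube docker-env", which only exists when Docker is the cluster's container runtime, so this containerd run skips. A sketch of the runtime gate (helper and parameter names are assumptions, not taken from the suite):

    package integration

    import "testing"

    // requireDockerRuntime skips when the cluster's container runtime is
    // not docker, since skaffold depends on "minikube docker-env".
    func requireDockerRuntime(t *testing.T, containerRuntime string) {
    	if containerRuntime != "docker" {
    		t.Skipf("skaffold requires docker-env; this run is testing the %s container runtime", containerRuntime)
    	}
    }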

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-120791" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-120791
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.31s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires a CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-742569 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-742569

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-742569

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/hosts:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/resolv.conf:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-742569

>>> host: crictl pods:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: crictl containers:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> k8s: describe netcat deployment:
error: context "kubenet-742569" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-742569" does not exist

>>> k8s: netcat logs:
error: context "kubenet-742569" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-742569" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-742569" does not exist

>>> k8s: coredns logs:
error: context "kubenet-742569" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-742569" does not exist

>>> k8s: api server logs:
error: context "kubenet-742569" does not exist

>>> host: /etc/cni:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: ip a s:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: ip r s:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: iptables-save:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: iptables table nat:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-742569" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-742569" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-742569" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: kubelet daemon config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> k8s: kubelet logs:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-742569

>>> host: docker daemon status:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: docker daemon config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: docker system info:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: cri-docker daemon status:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: cri-docker daemon config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: cri-dockerd version:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: containerd daemon status:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: containerd daemon config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: containerd config dump:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: crio daemon status:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: crio daemon config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: /etc/crio:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

>>> host: crio config:
* Profile "kubenet-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742569"

----------------------- debugLogs end: kubenet-742569 [took: 3.162580251s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-742569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-742569
--- SKIP: TestNetworkPlugins/group/kubenet (3.31s)
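Note: this skip is by design: kubenet is a legacy network plugin, not a CNI, and containerd-based clusters require a CNI. To reproduce this profile by hand with a concrete CNI instead, one option (flag value assumed from minikube's documented --cni choices) is: "minikube start -p kubenet-742569 --container-runtime=containerd --cni=bridge"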

TestNetworkPlugins/group/cilium (3.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-742569 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-742569

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-742569

>>> host: /etc/nsswitch.conf:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/hosts:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/resolv.conf:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-742569

>>> host: crictl pods:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: crictl containers:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> k8s: describe netcat deployment:
error: context "cilium-742569" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-742569" does not exist

>>> k8s: netcat logs:
error: context "cilium-742569" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-742569" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-742569" does not exist

>>> k8s: coredns logs:
error: context "cilium-742569" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-742569" does not exist

>>> k8s: api server logs:
error: context "cilium-742569" does not exist

>>> host: /etc/cni:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: ip a s:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: ip r s:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: iptables-save:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: iptables table nat:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-742569

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-742569

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-742569" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-742569" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-742569

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-742569

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-742569" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-742569" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-742569" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-742569" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-742569" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: kubelet daemon config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> k8s: kubelet logs:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-742569

>>> host: docker daemon status:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: docker daemon config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: docker system info:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: cri-docker daemon status:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: cri-docker daemon config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: cri-dockerd version:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: containerd daemon status:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: containerd daemon config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: containerd config dump:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: crio daemon status:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: crio daemon config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: /etc/crio:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

>>> host: crio config:
* Profile "cilium-742569" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742569"

----------------------- debugLogs end: cilium-742569 [took: 3.597355111s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-742569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-742569
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)
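Note: the cilium variant of this suite is skipped as outdated; to exercise Cilium with minikube manually, the built-in CNI selector is the supported path, e.g. (flag value assumed from minikube's documented --cni choices): "minikube start -p cilium-742569 --cni=cilium"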