Test Report: Docker_Linux_containerd_arm64 22353

dccbb7bb926f2ef30a57d8898bfc971889daa155:2025-12-29:43039

Tests failed (2/337)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  504.58
53     TestForceSystemdEnv   508.49
TestForceSystemdFlag (504.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1229 07:32:59.925165    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:33:30.655767    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m20.690144063s)

-- stdout --
	* [force-systemd-flag-275936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-275936" primary control-plane node in "force-systemd-flag-275936" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1229 07:31:54.990118  210456 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:31:54.990303  210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:31:54.990335  210456 out.go:374] Setting ErrFile to fd 2...
	I1229 07:31:54.990355  210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:31:54.990732  210456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:31:54.991290  210456 out.go:368] Setting JSON to false
	I1229 07:31:54.992670  210456 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4466,"bootTime":1766989049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 07:31:54.992775  210456 start.go:143] virtualization:  
	I1229 07:31:54.999193  210456 out.go:179] * [force-systemd-flag-275936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:31:55.014374  210456 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:31:55.014524  210456 notify.go:221] Checking for updates...
	I1229 07:31:55.021978  210456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:31:55.025445  210456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:31:55.028900  210456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 07:31:55.032526  210456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:31:55.035779  210456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:31:55.039716  210456 config.go:182] Loaded profile config "force-systemd-env-765623": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:31:55.039858  210456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:31:55.062289  210456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:31:55.062411  210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:31:55.126864  210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.117265138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:31:55.126971  210456 docker.go:319] overlay module found
	I1229 07:31:55.130288  210456 out.go:179] * Using the docker driver based on user configuration
	I1229 07:31:55.133429  210456 start.go:309] selected driver: docker
	I1229 07:31:55.133455  210456 start.go:928] validating driver "docker" against <nil>
	I1229 07:31:55.133470  210456 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:31:55.134222  210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:31:55.189237  210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.17992811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:31:55.189389  210456 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:31:55.189601  210456 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:31:55.192735  210456 out.go:179] * Using Docker driver with root privileges
	I1229 07:31:55.195689  210456 cni.go:84] Creating CNI manager for ""
	I1229 07:31:55.195764  210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:31:55.195784  210456 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:31:55.195864  210456 start.go:353] cluster config:
	{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:31:55.199033  210456 out.go:179] * Starting "force-systemd-flag-275936" primary control-plane node in "force-systemd-flag-275936" cluster
	I1229 07:31:55.201990  210456 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1229 07:31:55.205087  210456 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:31:55.208135  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:31:55.208186  210456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1229 07:31:55.208196  210456 cache.go:65] Caching tarball of preloaded images
	I1229 07:31:55.208228  210456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:31:55.208280  210456 preload.go:251] Found /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:31:55.208290  210456 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1229 07:31:55.208394  210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
	I1229 07:31:55.208411  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json: {Name:mkce2701c5739928b2701138ece40a77f13e0afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:31:55.235557  210456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:31:55.235583  210456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:31:55.235603  210456 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:31:55.235641  210456 start.go:360] acquireMachinesLock for force-systemd-flag-275936: {Name:mkc1ff8fd971687527ddb66e30c065b7dec5d125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:31:55.235763  210456 start.go:364] duration metric: took 102.705µs to acquireMachinesLock for "force-systemd-flag-275936"
	I1229 07:31:55.235792  210456 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1229 07:31:55.235867  210456 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:31:55.239336  210456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:31:55.239605  210456 start.go:159] libmachine.API.Create for "force-systemd-flag-275936" (driver="docker")
	I1229 07:31:55.239645  210456 client.go:173] LocalClient.Create starting
	I1229 07:31:55.239732  210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
	I1229 07:31:55.239774  210456 main.go:144] libmachine: Decoding PEM data...
	I1229 07:31:55.239790  210456 main.go:144] libmachine: Parsing certificate...
	I1229 07:31:55.239844  210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
	I1229 07:31:55.239866  210456 main.go:144] libmachine: Decoding PEM data...
	I1229 07:31:55.239877  210456 main.go:144] libmachine: Parsing certificate...
	I1229 07:31:55.240246  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:31:55.259118  210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:31:55.259228  210456 network_create.go:284] running [docker network inspect force-systemd-flag-275936] to gather additional debugging logs...
	I1229 07:31:55.259249  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936
	W1229 07:31:55.275676  210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 returned with exit code 1
	I1229 07:31:55.275729  210456 network_create.go:287] error running [docker network inspect force-systemd-flag-275936]: docker network inspect force-systemd-flag-275936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-275936 not found
	I1229 07:31:55.275743  210456 network_create.go:289] output of [docker network inspect force-systemd-flag-275936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-275936 not found
	
	** /stderr **
	I1229 07:31:55.275852  210456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:31:55.295712  210456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
	I1229 07:31:55.296163  210456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
	I1229 07:31:55.296569  210456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
	I1229 07:31:55.297004  210456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c78f904b7647 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:23:10:63:16:dd} reservation:<nil>}
	I1229 07:31:55.297525  210456 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4b020}
	I1229 07:31:55.297549  210456 network_create.go:124] attempt to create docker network force-systemd-flag-275936 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:31:55.297626  210456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-275936 force-systemd-flag-275936
	I1229 07:31:55.356469  210456 network_create.go:108] docker network force-systemd-flag-275936 192.168.85.0/24 created
	I1229 07:31:55.356503  210456 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-275936" container
	I1229 07:31:55.356596  210456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:31:55.372634  210456 cli_runner.go:164] Run: docker volume create force-systemd-flag-275936 --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:31:55.390334  210456 oci.go:103] Successfully created a docker volume force-systemd-flag-275936
	I1229 07:31:55.390428  210456 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-275936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --entrypoint /usr/bin/test -v force-systemd-flag-275936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:31:55.963123  210456 oci.go:107] Successfully prepared a docker volume force-systemd-flag-275936
	I1229 07:31:55.963188  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:31:55.963199  210456 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:31:55.963282  210456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:31:59.824384  210456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.86104634s)
	I1229 07:31:59.824418  210456 kic.go:203] duration metric: took 3.861215926s to extract preloaded images to volume ...
	W1229 07:31:59.824564  210456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:31:59.824685  210456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:31:59.876072  210456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-275936 --name force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-275936 --network force-systemd-flag-275936 --ip 192.168.85.2 --volume force-systemd-flag-275936:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:32:00.556829  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Running}}
	I1229 07:32:00.579290  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:00.610624  210456 cli_runner.go:164] Run: docker exec force-systemd-flag-275936 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:32:00.666102  210456 oci.go:144] the created container "force-systemd-flag-275936" has a running status.
	I1229 07:32:00.666144  210456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa...
	I1229 07:32:00.928093  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:32:00.928158  210456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:32:00.955575  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:00.978812  210456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:32:00.978832  210456 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-275936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:32:01.046827  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:01.063890  210456 machine.go:94] provisionDockerMachine start ...
	I1229 07:32:01.063978  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:01.083021  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:01.083355  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:01.083364  210456 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:32:01.084071  210456 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:32:04.237095  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
	
	I1229 07:32:04.237134  210456 ubuntu.go:182] provisioning hostname "force-systemd-flag-275936"
	I1229 07:32:04.237227  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:04.256216  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:04.256528  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:04.256544  210456 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-275936 && echo "force-systemd-flag-275936" | sudo tee /etc/hostname
	I1229 07:32:04.418929  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
	
	I1229 07:32:04.419007  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:04.446717  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:04.447036  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:04.447059  210456 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-275936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-275936/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-275936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:32:04.609426  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:32:04.609457  210456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
	I1229 07:32:04.609486  210456 ubuntu.go:190] setting up certificates
	I1229 07:32:04.609501  210456 provision.go:84] configureAuth start
	I1229 07:32:04.609566  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:04.626388  210456 provision.go:143] copyHostCerts
	I1229 07:32:04.626430  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:32:04.626466  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
	I1229 07:32:04.626484  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:32:04.626565  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
	I1229 07:32:04.626654  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:32:04.626677  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
	I1229 07:32:04.626681  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:32:04.626716  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
	I1229 07:32:04.626772  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:32:04.626794  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
	I1229 07:32:04.626799  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:32:04.626833  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
	I1229 07:32:04.626893  210456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-275936 san=[127.0.0.1 192.168.85.2 force-systemd-flag-275936 localhost minikube]
	I1229 07:32:05.170037  210456 provision.go:177] copyRemoteCerts
	I1229 07:32:05.170107  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:32:05.170157  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.198376  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.304972  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:32:05.305054  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:32:05.323515  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:32:05.323579  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:32:05.342427  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:32:05.342499  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:32:05.360800  210456 provision.go:87] duration metric: took 751.283522ms to configureAuth
	I1229 07:32:05.360827  210456 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:32:05.361018  210456 config.go:182] Loaded profile config "force-systemd-flag-275936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:32:05.361047  210456 machine.go:97] duration metric: took 4.297134989s to provisionDockerMachine
	I1229 07:32:05.361055  210456 client.go:176] duration metric: took 10.12140189s to LocalClient.Create
	I1229 07:32:05.361075  210456 start.go:167] duration metric: took 10.121472807s to libmachine.API.Create "force-systemd-flag-275936"
	I1229 07:32:05.361083  210456 start.go:293] postStartSetup for "force-systemd-flag-275936" (driver="docker")
	I1229 07:32:05.361091  210456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:32:05.361147  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:32:05.361185  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.380875  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.485408  210456 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:32:05.489100  210456 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:32:05.489170  210456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:32:05.489195  210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
	I1229 07:32:05.489255  210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
	I1229 07:32:05.489343  210456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
	I1229 07:32:05.489355  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /etc/ssl/certs/43522.pem
	I1229 07:32:05.489461  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:32:05.497396  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:32:05.515749  210456 start.go:296] duration metric: took 154.652975ms for postStartSetup
	I1229 07:32:05.516127  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:05.533819  210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
	I1229 07:32:05.534100  210456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:32:05.534159  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.551565  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.654403  210456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:32:05.659341  210456 start.go:128] duration metric: took 10.423458394s to createHost
	I1229 07:32:05.659375  210456 start.go:83] releasing machines lock for "force-systemd-flag-275936", held for 10.423592738s
	I1229 07:32:05.659448  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:05.678420  210456 ssh_runner.go:195] Run: cat /version.json
	I1229 07:32:05.678492  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.678576  210456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:32:05.678642  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.697110  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.710766  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.800849  210456 ssh_runner.go:195] Run: systemctl --version
	I1229 07:32:05.906106  210456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:32:05.913794  210456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:32:05.913886  210456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:32:05.943408  210456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:32:05.943429  210456 start.go:496] detecting cgroup driver to use...
	I1229 07:32:05.943443  210456 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:32:05.943498  210456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:32:05.960297  210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:32:05.975696  210456 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:32:05.975754  210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:32:05.997010  210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:32:06.022997  210456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:32:06.148117  210456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:32:06.280642  210456 docker.go:234] disabling docker service ...
	I1229 07:32:06.280756  210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:32:06.304036  210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:32:06.318700  210456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:32:06.443465  210456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:32:06.572584  210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:32:06.586444  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:32:06.602103  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:32:06.611453  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:32:06.620606  210456 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:32:06.620725  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:32:06.630240  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:32:06.639541  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:32:06.649286  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:32:06.658362  210456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:32:06.667478  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:32:06.677469  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:32:06.687174  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:32:06.696948  210456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:32:06.705434  210456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:32:06.713593  210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:32:06.830071  210456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:32:06.972284  210456 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1229 07:32:06.972372  210456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1229 07:32:06.976442  210456 start.go:574] Will wait 60s for crictl version
	I1229 07:32:06.976556  210456 ssh_runner.go:195] Run: which crictl
	I1229 07:32:06.980543  210456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:32:07.009695  210456 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1229 07:32:07.009824  210456 ssh_runner.go:195] Run: containerd --version
	I1229 07:32:07.032066  210456 ssh_runner.go:195] Run: containerd --version
	I1229 07:32:07.059211  210456 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1229 07:32:07.062242  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:32:07.079092  210456 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:32:07.083157  210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:32:07.093628  210456 kubeadm.go:884] updating cluster {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:32:07.093752  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:32:07.093832  210456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:32:07.119407  210456 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:32:07.119431  210456 containerd.go:542] Images already preloaded, skipping extraction
	I1229 07:32:07.119497  210456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:32:07.144660  210456 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:32:07.144737  210456 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:32:07.144759  210456 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1229 07:32:07.144898  210456 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-275936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:32:07.144994  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1229 07:32:07.174108  210456 cni.go:84] Creating CNI manager for ""
	I1229 07:32:07.174131  210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:32:07.174152  210456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:32:07.174176  210456 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-275936 NodeName:force-systemd-flag-275936 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:32:07.174301  210456 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-275936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:32:07.174374  210456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:32:07.182508  210456 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:32:07.182591  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:32:07.190487  210456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1229 07:32:07.203868  210456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:32:07.217157  210456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1229 07:32:07.229905  210456 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:32:07.233686  210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:32:07.243649  210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:32:07.352826  210456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:32:07.369694  210456 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936 for IP: 192.168.85.2
	I1229 07:32:07.369715  210456 certs.go:195] generating shared ca certs ...
	I1229 07:32:07.369731  210456 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.369899  210456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
	I1229 07:32:07.369954  210456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
	I1229 07:32:07.369966  210456 certs.go:257] generating profile certs ...
	I1229 07:32:07.370034  210456 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key
	I1229 07:32:07.370051  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt with IP's: []
	I1229 07:32:07.651508  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt ...
	I1229 07:32:07.651543  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt: {Name:mkc96444933691c9c7712e10522774b7837acc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.651739  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key ...
	I1229 07:32:07.651754  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key: {Name:mk42aa340448fdd8ef54b06b419e1bc9521849ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.651848  210456 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f
	I1229 07:32:07.651868  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:32:07.848324  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f ...
	I1229 07:32:07.848363  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f: {Name:mk68515dedc39c6aa92cea4b93fb1d928671a1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.848540  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f ...
	I1229 07:32:07.848554  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f: {Name:mk54be500c1ee65f80b3e1b34359ca9c53176eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.848636  210456 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt
	I1229 07:32:07.848714  210456 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key
	I1229 07:32:07.848778  210456 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key
	I1229 07:32:07.848799  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt with IP's: []
	I1229 07:32:07.938444  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt ...
	I1229 07:32:07.938479  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt: {Name:mkdba6db08e3be7cf95db626fb2a49fc799397bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.938677  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key ...
	I1229 07:32:07.938695  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key: {Name:mk5ae15bf1e8cecd3236539da010f90c7a6ecc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.938805  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:32:07.938827  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:32:07.938845  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:32:07.938871  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:32:07.938889  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:32:07.938912  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:32:07.938935  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:32:07.938954  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:32:07.939035  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
	W1229 07:32:07.939082  210456 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
	I1229 07:32:07.939096  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:32:07.939131  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:32:07.939161  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:32:07.939189  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
	I1229 07:32:07.939241  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:32:07.939275  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /usr/share/ca-certificates/43522.pem
	I1229 07:32:07.939292  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:07.939303  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem -> /usr/share/ca-certificates/4352.pem
	I1229 07:32:07.939851  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:32:07.959114  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:32:07.979433  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:32:07.999407  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:32:08.025146  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:32:08.044851  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:32:08.064128  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:32:08.082920  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:32:08.101321  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
	I1229 07:32:08.120009  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:32:08.138483  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
	I1229 07:32:08.156729  210456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
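	At this point every CA, apiserver, and proxy-client file plus the embedded kubeconfig has been copied onto the node; a quick sanity check that they landed where kubeadm expects them:
	  $ minikube ssh -p force-systemd-flag-275936 -- "sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig"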
	I1229 07:32:08.171909  210456 ssh_runner.go:195] Run: openssl version
	I1229 07:32:08.195409  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.211936  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
	I1229 07:32:08.230603  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.242069  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.242186  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.291052  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:32:08.298917  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:32:08.306719  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.314431  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:32:08.322022  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.325798  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.325862  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.366998  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:32:08.374640  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:32:08.382089  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.389623  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
	I1229 07:32:08.397153  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.400818  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.400884  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.442132  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:32:08.450035  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
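	The openssl/ln pairs above do by hand what openssl rehash (c_rehash) would do: each CA is linked under /etc/ssl/certs by its subject hash so OpenSSL can locate it. Using the hash the log reports for minikubeCA, the result can be checked inside the node (e.g. via minikube ssh) with:
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	  $ ls -l /etc/ssl/certs/b5213941.0                                            # -> /etc/ssl/certs/minikubeCA.pem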
	I1229 07:32:08.457563  210456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:32:08.461327  210456 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:32:08.461398  210456 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:32:08.461473  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1229 07:32:08.461536  210456 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:32:08.487727  210456 cri.go:96] found id: ""
	I1229 07:32:08.487799  210456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:32:08.496267  210456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:32:08.504412  210456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:32:08.504475  210456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:32:08.512558  210456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:32:08.512581  210456 kubeadm.go:158] found existing configuration files:
	
	I1229 07:32:08.512658  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:32:08.521258  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:32:08.521347  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:32:08.529140  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:32:08.537528  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:32:08.537643  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:32:08.545668  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:32:08.554110  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:32:08.554178  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:32:08.562121  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:32:08.570646  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:32:08.570735  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
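	The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at control-plane.minikube.internal:8443, otherwise it is removed so kubeadm regenerates it. A condensed sketch of the same logic (a paraphrase, not minikube's actual code):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done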
	I1229 07:32:08.578306  210456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:32:08.621067  210456 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:32:08.621176  210456 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:32:08.701369  210456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:32:08.701449  210456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:32:08.701490  210456 kubeadm.go:319] OS: Linux
	I1229 07:32:08.701540  210456 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:32:08.701591  210456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:32:08.701642  210456 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:32:08.701717  210456 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:32:08.701769  210456 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:32:08.701820  210456 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:32:08.701869  210456 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:32:08.701919  210456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:32:08.701970  210456 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:32:08.775505  210456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:32:08.775618  210456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:32:08.775723  210456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:32:08.781821  210456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:32:08.788503  210456 out.go:252]   - Generating certificates and keys ...
	I1229 07:32:08.788612  210456 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:32:08.788684  210456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:32:09.057098  210456 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:32:09.418697  210456 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:32:09.572406  210456 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:32:09.643544  210456 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:32:10.339592  210456 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:32:10.339844  210456 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:32:10.482674  210456 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:32:10.483213  210456 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:32:10.795512  210456 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:32:10.975588  210456 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:32:11.248756  210456 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:32:11.248853  210456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:32:11.450295  210456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:32:11.719139  210456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:32:11.898464  210456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:32:12.299659  210456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:32:12.511471  210456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:32:12.512244  210456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:32:12.515181  210456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:32:12.518939  210456 out.go:252]   - Booting up control plane ...
	I1229 07:32:12.519048  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:32:12.519127  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:32:12.519194  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:32:12.536415  210456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:32:12.536828  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:32:12.543945  210456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:32:12.544276  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:32:12.544449  210456 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:32:12.687864  210456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:32:12.687987  210456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:36:12.687849  210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000060915s
	I1229 07:36:12.694469  210456 kubeadm.go:319] 
	I1229 07:36:12.694559  210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:36:12.694595  210456 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:36:12.694725  210456 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:36:12.694731  210456 kubeadm.go:319] 
	I1229 07:36:12.694848  210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:36:12.694906  210456 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:36:12.694944  210456 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:36:12.694949  210456 kubeadm.go:319] 
	I1229 07:36:12.701322  210456 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:36:12.701789  210456 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:36:12.701905  210456 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:36:12.702188  210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:36:12.702194  210456 kubeadm.go:319] 
	I1229 07:36:12.702267  210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
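	The init attempt fails because the kubelet never answers its health endpoint on 127.0.0.1:10248 within the 4m window. The commands kubeadm itself suggests, plus a check of the cgroup mode flagged by the SystemVerification warning, can be run inside the node to narrow this down (a sketch; profile name as above):
	  $ minikube ssh -p force-systemd-flag-275936 -- "sudo systemctl status kubelet --no-pager"
	  $ minikube ssh -p force-systemd-flag-275936 -- "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	  $ minikube ssh -p force-systemd-flag-275936 -- "curl -sS http://127.0.0.1:10248/healthz; echo"
	  $ minikube ssh -p force-systemd-flag-275936 -- "stat -fc %T /sys/fs/cgroup/"    # cgroup2fs = v2, tmpfs = v1
	Per the cgroups v1 warning above, a v1 host running kubelet v1.35 additionally needs the FailCgroupV1 option set to false in the KubeletConfiguration (lowercase failCgroupV1 as a YAML key, going by the usual kubelet config naming), which the generated config above does not set.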
	W1229 07:36:12.702503  210456 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000060915s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000060915s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:36:12.702830  210456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1229 07:36:13.130969  210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:36:13.144509  210456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:36:13.144588  210456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:36:13.152734  210456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:36:13.152755  210456 kubeadm.go:158] found existing configuration files:
	
	I1229 07:36:13.152827  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:36:13.161205  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:36:13.161278  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:36:13.168963  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:36:13.177064  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:36:13.177181  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:36:13.185073  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:36:13.192932  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:36:13.192995  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:36:13.200704  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:36:13.208294  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:36:13.208364  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:36:13.215923  210456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:36:13.255474  210456 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:36:13.255540  210456 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:36:13.344915  210456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:36:13.345140  210456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:36:13.345227  210456 kubeadm.go:319] OS: Linux
	I1229 07:36:13.345315  210456 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:36:13.345400  210456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:36:13.345502  210456 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:36:13.345585  210456 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:36:13.345684  210456 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:36:13.345799  210456 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:36:13.345901  210456 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:36:13.346009  210456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:36:13.346104  210456 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:36:13.422164  210456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:36:13.422340  210456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:36:13.422476  210456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:36:13.431759  210456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:36:13.437264  210456 out.go:252]   - Generating certificates and keys ...
	I1229 07:36:13.437454  210456 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:36:13.437542  210456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:36:13.437642  210456 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:36:13.437707  210456 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:36:13.437821  210456 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:36:13.437893  210456 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:36:13.437968  210456 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:36:13.438034  210456 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:36:13.438113  210456 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:36:13.438193  210456 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:36:13.438257  210456 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:36:13.438352  210456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:36:13.634753  210456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:36:14.203935  210456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:36:14.514271  210456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:36:14.708050  210456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:36:14.968546  210456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:36:14.969321  210456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:36:14.972216  210456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:36:14.975375  210456 out.go:252]   - Booting up control plane ...
	I1229 07:36:14.975476  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:36:14.975579  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:36:14.975646  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:36:14.999594  210456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:36:14.999710  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:36:15.032143  210456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:36:15.032250  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:36:15.032290  210456 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:36:15.207023  210456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:36:15.207148  210456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:40:15.207793  210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001145684s
	I1229 07:40:15.207822  210456 kubeadm.go:319] 
	I1229 07:40:15.207881  210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:40:15.207921  210456 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:40:15.208335  210456 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:40:15.208367  210456 kubeadm.go:319] 
	I1229 07:40:15.208562  210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:40:15.208767  210456 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:40:15.208824  210456 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:40:15.208831  210456 kubeadm.go:319] 
	I1229 07:40:15.214036  210456 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:40:15.214541  210456 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:40:15.214683  210456 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:40:15.215072  210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:40:15.215094  210456 kubeadm.go:319] 
	I1229 07:40:15.215202  210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:40:15.215232  210456 kubeadm.go:403] duration metric: took 8m6.753852906s to StartCluster
	I1229 07:40:15.215267  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:40:15.215335  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:40:15.242364  210456 cri.go:96] found id: ""
	I1229 07:40:15.242397  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.242407  210456 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:40:15.242414  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1229 07:40:15.242481  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:40:15.268537  210456 cri.go:96] found id: ""
	I1229 07:40:15.268562  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.268570  210456 logs.go:284] No container was found matching "etcd"
	I1229 07:40:15.268577  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1229 07:40:15.268637  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:40:15.296387  210456 cri.go:96] found id: ""
	I1229 07:40:15.296427  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.296436  210456 logs.go:284] No container was found matching "coredns"
	I1229 07:40:15.296443  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:40:15.296513  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:40:15.322743  210456 cri.go:96] found id: ""
	I1229 07:40:15.322771  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.322784  210456 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:40:15.322792  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:40:15.322868  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:40:15.351562  210456 cri.go:96] found id: ""
	I1229 07:40:15.351598  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.351607  210456 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:40:15.351619  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:40:15.351682  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:40:15.376895  210456 cri.go:96] found id: ""
	I1229 07:40:15.376919  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.376928  210456 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:40:15.376935  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1229 07:40:15.376995  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:40:15.404023  210456 cri.go:96] found id: ""
	I1229 07:40:15.404049  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.404058  210456 logs.go:284] No container was found matching "kindnet"
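	Every per-component crictl query above returns an empty ID list, i.e. containerd never received any kube-system pod sandboxes. The same can be confirmed in a single call against the endpoint from the kubelet config:
	  $ minikube ssh -p force-systemd-flag-275936 -- "sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a"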
	I1229 07:40:15.404069  210456 logs.go:123] Gathering logs for dmesg ...
	I1229 07:40:15.404082  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:40:15.418184  210456 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:40:15.418215  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:40:15.484850  210456 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:40:15.476614    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.477057    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478609    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478976    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.480500    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:40:15.476614    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.477057    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478609    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478976    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.480500    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:40:15.484926  210456 logs.go:123] Gathering logs for containerd ...
	I1229 07:40:15.484952  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1229 07:40:15.525775  210456 logs.go:123] Gathering logs for container status ...
	I1229 07:40:15.525809  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:40:15.555955  210456 logs.go:123] Gathering logs for kubelet ...
	I1229 07:40:15.556034  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1229 07:40:15.612571  210456 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:40:15.612644  210456 out.go:285] * 
	* 
	W1229 07:40:15.612696  210456 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:40:15.612713  210456 out.go:285] * 
	* 
	W1229 07:40:15.612962  210456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:40:15.617816  210456 out.go:203] 
	W1229 07:40:15.621782  210456 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:40:15.621867  210456 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:40:15.621888  210456 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:40:15.625797  210456 out.go:203] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-275936 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-29 07:40:16.063897247 +0000 UTC m=+3236.684374726
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-275936
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-275936:

-- stdout --
	[
	    {
	        "Id": "bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c",
	        "Created": "2025-12-29T07:31:59.891142554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:31:59.955774715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/hosts",
	        "LogPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c-json.log",
	        "Name": "/force-systemd-flag-275936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-275936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-275936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c",
	                "LowerDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48-init/diff:/var/lib/docker/overlay2/54d5b7cffc5e9463f8f08189f8469b00e160a6e6f01791a5d6d8fd2d4f288a08/diff",
	                "MergedDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-275936",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-275936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-275936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-275936",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-275936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e50ce24d654b7ce8d54592ef6b0c5b855027bace315a9ff31ab4320b9cfdb634",
	            "SandboxKey": "/var/run/docker/netns/e50ce24d654b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-275936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:30:e2:6b:f6:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4040d5a87c0265f8c13a107209cef1b2ed1391d8349dff186e2e42422778dae",
	                    "EndpointID": "39418bbda9bd50f7adeae3802131531d2f29ea9a847038a1e50125a2b088b07e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-275936",
	                        "bf90252d40a7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275936 -n force-systemd-flag-275936
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275936 -n force-systemd-flag-275936: exit status 6 (310.942695ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1229 07:40:16.378613  239592 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275936" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-275936 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:34 UTC │ 29 Dec 25 07:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-599664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
	│ stop    │ -p old-k8s-version-599664 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-599664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
	│ start   │ -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:36 UTC │
	│ image   │ old-k8s-version-599664 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ pause   │ -p old-k8s-version-599664 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ unpause │ -p old-k8s-version-599664 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ delete  │ -p old-k8s-version-599664                                                                                                                                                                                                                           │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ delete  │ -p old-k8s-version-599664                                                                                                                                                                                                                           │ old-k8s-version-599664       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ start   │ -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-294279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
	│ stop    │ -p embed-certs-294279 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-294279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
	│ start   │ -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:38 UTC │
	│ image   │ embed-certs-294279 image list --format=json                                                                                                                                                                                                         │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ pause   │ -p embed-certs-294279 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ unpause │ -p embed-certs-294279 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ delete  │ -p embed-certs-294279                                                                                                                                                                                                                               │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ delete  │ -p embed-certs-294279                                                                                                                                                                                                                               │ embed-certs-294279           │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ delete  │ -p disable-driver-mounts-948437                                                                                                                                                                                                                     │ disable-driver-mounts-948437 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-918033            │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p no-preload-918033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-918033            │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │ 29 Dec 25 07:40 UTC │
	│ stop    │ -p no-preload-918033 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-918033            │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │                     │
	│ ssh     │ force-systemd-flag-275936 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-275936    │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │ 29 Dec 25 07:40 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:39:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:39:08.253726  234900 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:39:08.253941  234900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:08.253968  234900 out.go:374] Setting ErrFile to fd 2...
	I1229 07:39:08.253988  234900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:08.254396  234900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:39:08.254946  234900 out.go:368] Setting JSON to false
	I1229 07:39:08.255864  234900 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4899,"bootTime":1766989049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 07:39:08.255984  234900 start.go:143] virtualization:  
	I1229 07:39:08.259925  234900 out.go:179] * [no-preload-918033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:39:08.264254  234900 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:39:08.264327  234900 notify.go:221] Checking for updates...
	I1229 07:39:08.268454  234900 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:39:08.271672  234900 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:39:08.274842  234900 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 07:39:08.277885  234900 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:39:08.280957  234900 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:39:08.284418  234900 config.go:182] Loaded profile config "force-systemd-flag-275936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:39:08.284591  234900 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:39:08.314894  234900 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:39:08.315024  234900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:08.372212  234900 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:39:08.362259697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:08.372319  234900 docker.go:319] overlay module found
	I1229 07:39:08.375573  234900 out.go:179] * Using the docker driver based on user configuration
	I1229 07:39:08.378630  234900 start.go:309] selected driver: docker
	I1229 07:39:08.378661  234900 start.go:928] validating driver "docker" against <nil>
	I1229 07:39:08.378675  234900 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:39:08.379599  234900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:08.435593  234900 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:39:08.426040108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:08.435738  234900 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:39:08.435972  234900 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:39:08.439007  234900 out.go:179] * Using Docker driver with root privileges
	I1229 07:39:08.441942  234900 cni.go:84] Creating CNI manager for ""
	I1229 07:39:08.442009  234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:39:08.442022  234900 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:39:08.442097  234900 start.go:353] cluster config:
	{Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:08.445181  234900 out.go:179] * Starting "no-preload-918033" primary control-plane node in "no-preload-918033" cluster
	I1229 07:39:08.448006  234900 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1229 07:39:08.451038  234900 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:39:08.454039  234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:39:08.454129  234900 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:39:08.454212  234900 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json ...
	I1229 07:39:08.454267  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json: {Name:mk57bc12d2c7e99169d51482e2813f6bee0f00eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:08.454549  234900 cache.go:107] acquiring lock: {Name:mka009506884cbc45a9becd8890cfc8b6acba926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454628  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:39:08.454642  234900 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.697µs
	I1229 07:39:08.454655  234900 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:39:08.454686  234900 cache.go:107] acquiring lock: {Name:mk7d5d886bd09d6d06a4adcb06e83ad6d78e5fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454733  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:39:08.454739  234900 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 61.03µs
	I1229 07:39:08.454745  234900 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:39:08.454746  234900 cache.go:107] acquiring lock: {Name:mk38e6c21d5541b01903a26199dd289b6ff01fd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454764  234900 cache.go:107] acquiring lock: {Name:mk03e847ca15d25512ae766375c2e904a7fd4e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454791  234900 cache.go:107] acquiring lock: {Name:mk1cff5b084f8d2ac170cbb020a0f68379a8bd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454830  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:39:08.454838  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:39:08.454839  234900 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.592µs
	I1229 07:39:08.454847  234900 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:39:08.454844  234900 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 54.13µs
	I1229 07:39:08.454854  234900 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:39:08.454861  234900 cache.go:107] acquiring lock: {Name:mk5ba61770185319c8457b47354fe470903e8a33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454880  234900 cache.go:107] acquiring lock: {Name:mk75305189acd73002b72ce07e1716087e384298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.454893  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:39:08.454899  234900 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 39.96µs
	I1229 07:39:08.454905  234900 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:39:08.454921  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:39:08.454923  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:39:08.454926  234900 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 188.055µs
	I1229 07:39:08.454932  234900 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:39:08.454863  234900 cache.go:107] acquiring lock: {Name:mk67beadb7d0c4522ccf8e2398a82bf1fd7da079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.455011  234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:39:08.455064  234900 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 201.192µs
	I1229 07:39:08.455077  234900 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:39:08.454930  234900 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 51.11µs
	I1229 07:39:08.455084  234900 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:39:08.455122  234900 cache.go:87] Successfully saved all images to host disk.
	I1229 07:39:08.474242  234900 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:39:08.474266  234900 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:39:08.474293  234900 cache.go:243] Successfully downloaded all kic artifacts
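Note: the base-image step above completes without a pull because the pinned kicbase digest is already present in the local Docker daemon. A quick way to reproduce that kind of presence check is `docker image inspect`, which exits non-zero for an unknown reference. The Go wrapper below is a hypothetical illustration of that check only (the `imageInDaemon` helper and the tag-only reference are assumptions); it is not minikube's own image.go logic.

// image_exists.go - hypothetical helper: reports whether a reference is
// already present in the local Docker daemon via `docker image inspect`.
package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(ref string) bool {
	// `docker image inspect` exits with a non-zero status when the image
	// is not found locally; only the exit code matters here.
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	// Tag-only reference used for brevity; the log pins a sha256 digest.
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353"
	fmt.Println(ref, "in local daemon:", imageInDaemon(ref))
}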
	I1229 07:39:08.474325  234900 start.go:360] acquireMachinesLock for no-preload-918033: {Name:mkb893b58aed3bac3f457e96e7f679b0befc5a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:08.474434  234900 start.go:364] duration metric: took 89.404µs to acquireMachinesLock for "no-preload-918033"
	I1229 07:39:08.474463  234900 start.go:93] Provisioning new machine with config: &{Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1229 07:39:08.474543  234900 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:39:08.479701  234900 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:39:08.479947  234900 start.go:159] libmachine.API.Create for "no-preload-918033" (driver="docker")
	I1229 07:39:08.479984  234900 client.go:173] LocalClient.Create starting
	I1229 07:39:08.480064  234900 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
	I1229 07:39:08.480103  234900 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:08.480123  234900 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:08.480179  234900 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
	I1229 07:39:08.480202  234900 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:08.480218  234900 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:08.480589  234900 cli_runner.go:164] Run: docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:39:08.496345  234900 cli_runner.go:211] docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:39:08.496439  234900 network_create.go:284] running [docker network inspect no-preload-918033] to gather additional debugging logs...
	I1229 07:39:08.496460  234900 cli_runner.go:164] Run: docker network inspect no-preload-918033
	W1229 07:39:08.512483  234900 cli_runner.go:211] docker network inspect no-preload-918033 returned with exit code 1
	I1229 07:39:08.512516  234900 network_create.go:287] error running [docker network inspect no-preload-918033]: docker network inspect no-preload-918033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-918033 not found
	I1229 07:39:08.512542  234900 network_create.go:289] output of [docker network inspect no-preload-918033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-918033 not found
	
	** /stderr **
	I1229 07:39:08.512646  234900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:08.534586  234900 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
	I1229 07:39:08.535022  234900 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
	I1229 07:39:08.535463  234900 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
	I1229 07:39:08.535972  234900 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06a30}
	I1229 07:39:08.535996  234900 network_create.go:124] attempt to create docker network no-preload-918033 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:39:08.536061  234900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-918033 no-preload-918033
	I1229 07:39:08.592919  234900 network_create.go:108] docker network no-preload-918033 192.168.76.0/24 created
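Note: before creating the node container, the start scans existing bridge networks, skips the subnets already taken (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24 above) and creates a bridge network on the first free /24. The sketch below simply reissues the logged `docker network create` call from Go with the same flags; the wrapper itself is illustrative and is not minikube's cli_runner.

// create_network.go - illustrative wrapper around the docker CLI call logged
// above; flags and values are taken from the log, the wrapper is assumed.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	name := "no-preload-918033"
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.76.0/24",
		"--gateway=192.168.76.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}

	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker network create failed:", err)
		os.Exit(1)
	}
}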
	I1229 07:39:08.592955  234900 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-918033" container
	I1229 07:39:08.593188  234900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:39:08.609569  234900 cli_runner.go:164] Run: docker volume create no-preload-918033 --label name.minikube.sigs.k8s.io=no-preload-918033 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:39:08.627029  234900 oci.go:103] Successfully created a docker volume no-preload-918033
	I1229 07:39:08.627121  234900 cli_runner.go:164] Run: docker run --rm --name no-preload-918033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-918033 --entrypoint /usr/bin/test -v no-preload-918033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:39:09.199812  234900 oci.go:107] Successfully prepared a docker volume no-preload-918033
	I1229 07:39:09.199873  234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	W1229 07:39:09.200009  234900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:39:09.200142  234900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:39:09.263081  234900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-918033 --name no-preload-918033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-918033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-918033 --network no-preload-918033 --ip 192.168.76.2 --volume no-preload-918033:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:39:09.593123  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Running}}
	I1229 07:39:09.626510  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:09.648579  234900 cli_runner.go:164] Run: docker exec no-preload-918033 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:39:09.701824  234900 oci.go:144] the created container "no-preload-918033" has a running status.
	I1229 07:39:09.701851  234900 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa...
	I1229 07:39:09.872417  234900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:39:09.897573  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:09.923332  234900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:39:09.923351  234900 kic_runner.go:114] Args: [docker exec --privileged no-preload-918033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:39:09.970949  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:10.002805  234900 machine.go:94] provisionDockerMachine start ...
	I1229 07:39:10.002903  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:10.030392  234900 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:10.030734  234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1229 07:39:10.030744  234900 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:39:10.031421  234900 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46652->127.0.0.1:33073: read: connection reset by peer
	I1229 07:39:13.184915  234900 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-918033
	
	I1229 07:39:13.184938  234900 ubuntu.go:182] provisioning hostname "no-preload-918033"
	I1229 07:39:13.185018  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:13.207931  234900 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:13.208247  234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1229 07:39:13.208264  234900 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-918033 && echo "no-preload-918033" | sudo tee /etc/hostname
	I1229 07:39:13.370349  234900 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-918033
	
	I1229 07:39:13.370422  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:13.388827  234900 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:13.389192  234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1229 07:39:13.389217  234900 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-918033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-918033/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-918033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:39:13.541382  234900 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:39:13.541470  234900 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
	I1229 07:39:13.541533  234900 ubuntu.go:190] setting up certificates
	I1229 07:39:13.541565  234900 provision.go:84] configureAuth start
	I1229 07:39:13.541676  234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
	I1229 07:39:13.558688  234900 provision.go:143] copyHostCerts
	I1229 07:39:13.558751  234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
	I1229 07:39:13.558759  234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:39:13.558839  234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
	I1229 07:39:13.558935  234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
	I1229 07:39:13.558940  234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:39:13.558966  234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
	I1229 07:39:13.559028  234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
	I1229 07:39:13.559032  234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:39:13.559063  234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
	I1229 07:39:13.559123  234900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.no-preload-918033 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-918033]
	I1229 07:39:13.744999  234900 provision.go:177] copyRemoteCerts
	I1229 07:39:13.745105  234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:39:13.745159  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:13.764839  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:13.873310  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:39:13.891176  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:39:13.908657  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:39:13.927338  234900 provision.go:87] duration metric: took 385.73539ms to configureAuth
	I1229 07:39:13.927386  234900 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:39:13.927679  234900 config.go:182] Loaded profile config "no-preload-918033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:39:13.927693  234900 machine.go:97] duration metric: took 3.924869366s to provisionDockerMachine
	I1229 07:39:13.927707  234900 client.go:176] duration metric: took 5.447715458s to LocalClient.Create
	I1229 07:39:13.927735  234900 start.go:167] duration metric: took 5.447790413s to libmachine.API.Create "no-preload-918033"
	I1229 07:39:13.927746  234900 start.go:293] postStartSetup for "no-preload-918033" (driver="docker")
	I1229 07:39:13.927760  234900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:13.927847  234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:13.927902  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:13.950733  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:14.061186  234900 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:14.064731  234900 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:14.064767  234900 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:39:14.064788  234900 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
	I1229 07:39:14.064855  234900 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
	I1229 07:39:14.064937  234900 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
	I1229 07:39:14.065064  234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:14.072620  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:39:14.090493  234900 start.go:296] duration metric: took 162.731535ms for postStartSetup
	I1229 07:39:14.090867  234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
	I1229 07:39:14.107734  234900 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json ...
	I1229 07:39:14.108023  234900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:14.108072  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:14.125268  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:14.234121  234900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:14.238637  234900 start.go:128] duration metric: took 5.764078763s to createHost
	I1229 07:39:14.238665  234900 start.go:83] releasing machines lock for "no-preload-918033", held for 5.76421762s
	I1229 07:39:14.238734  234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
	I1229 07:39:14.256198  234900 ssh_runner.go:195] Run: cat /version.json
	I1229 07:39:14.256265  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:14.256333  234900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:39:14.256408  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:14.276099  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:14.297296  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:14.478648  234900 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:14.485322  234900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:39:14.489677  234900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:39:14.489781  234900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:14.517950  234900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:39:14.517982  234900 start.go:496] detecting cgroup driver to use...
	I1229 07:39:14.518015  234900 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:39:14.518073  234900 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:39:14.533589  234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:39:14.546296  234900 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:39:14.546357  234900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:39:14.564903  234900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:39:14.585583  234900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:39:14.717173  234900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:39:14.855840  234900 docker.go:234] disabling docker service ...
	I1229 07:39:14.855901  234900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:39:14.878673  234900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:39:14.891815  234900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:39:15.016371  234900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:39:15.151016  234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:39:15.165985  234900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:39:15.182014  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:39:15.191756  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:39:15.201012  234900 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1229 07:39:15.201098  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1229 07:39:15.210978  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:39:15.220009  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:39:15.229423  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:39:15.238426  234900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:39:15.246955  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:39:15.256420  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:39:15.265188  234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:39:15.274901  234900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:39:15.282729  234900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:39:15.290378  234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:15.414362  234900 ssh_runner.go:195] Run: sudo systemctl restart containerd
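Note: the run above selects the "cgroupfs" cgroup driver, so SystemdCgroup is forced to false in /etc/containerd/config.toml via the logged sed substitution, followed by a daemon-reload and a containerd restart. The following is a minimal Go sketch of that same SystemdCgroup substitution applied to a local copy of the file (the "config.toml" path is an assumption for local experimentation); minikube itself performs the edit with sed over SSH as shown in the log, not with this program.

// systemd_cgroup_toggle.go - illustrative regex equivalent of the logged
// sed expression: s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "config.toml" // assumption: a local copy of /etc/containerd/config.toml

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// (?m) makes ^ and $ match per line, mirroring sed's line-oriented edit.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}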
	I1229 07:39:15.524937  234900 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1229 07:39:15.525091  234900 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1229 07:39:15.529380  234900 start.go:574] Will wait 60s for crictl version
	I1229 07:39:15.529471  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:15.535572  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:39:15.563427  234900 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1229 07:39:15.563539  234900 ssh_runner.go:195] Run: containerd --version
	I1229 07:39:15.583230  234900 ssh_runner.go:195] Run: containerd --version
	I1229 07:39:15.609806  234900 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1229 07:39:15.612706  234900 cli_runner.go:164] Run: docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:15.630810  234900 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:39:15.634949  234900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
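Note: the bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing entry and appends the current gateway address before copying the file back. The Go sketch below shows the same drop-then-append pattern on a scratch file (the "hosts" path is an assumption); it is illustrative only and does not touch the real /etc/hosts.

// hosts_update.go - illustrative idempotent update of a hosts-style file:
// remove any existing host.minikube.internal line, then append the mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "hosts" // assumption: a scratch copy, not the real /etc/hosts
	mapping := "192.168.76.1\thost.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, mapping)

	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}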
	I1229 07:39:15.645913  234900 kubeadm.go:884] updating cluster {Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:39:15.646043  234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:39:15.646096  234900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:15.672216  234900 containerd.go:631] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1229 07:39:15.672241  234900 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1229 07:39:15.672291  234900 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:15.672495  234900 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:15.672608  234900 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:15.672695  234900 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:15.672799  234900 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:15.672881  234900 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:39:15.672977  234900 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:15.673094  234900 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:15.674469  234900 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:15.674873  234900 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:15.675013  234900 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:15.676181  234900 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:39:15.676656  234900 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:15.676733  234900 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:15.676803  234900 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:15.677101  234900 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:15.993005  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1229 07:39:15.993125  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1229 07:39:15.995200  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0" and sha "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5"
	I1229 07:39:15.995279  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:15.996013  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0" and sha "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856"
	I1229 07:39:15.996083  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:15.999478  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1229 07:39:15.999593  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:16.004311  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0" and sha "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0"
	I1229 07:39:16.004429  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:16.010629  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1229 07:39:16.010748  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:16.016900  234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0" and sha "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f"
	I1229 07:39:16.017080  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:16.053368  234900 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1229 07:39:16.053437  234900 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1229 07:39:16.053510  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.060996  234900 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I1229 07:39:16.061066  234900 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:16.061148  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.080350  234900 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1229 07:39:16.080582  234900 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:16.080639  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.080448  234900 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I1229 07:39:16.080699  234900 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:16.080721  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.080534  234900 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I1229 07:39:16.080749  234900 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:16.080769  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.090692  234900 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1229 07:39:16.090745  234900 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:16.090796  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.090888  234900 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I1229 07:39:16.090909  234900 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:16.090932  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:16.091012  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:39:16.091082  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:16.095055  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:16.095147  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:16.095216  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:16.172466  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:16.172572  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:16.172675  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:16.172783  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:39:16.211542  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:16.211694  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:16.211791  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:16.324447  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:39:16.324614  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:39:16.324759  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:16.324966  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:16.351114  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:39:16.351301  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:39:16.351441  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:39:16.443279  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1229 07:39:16.443412  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:39:16.443526  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:39:16.443577  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1229 07:39:16.443694  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:39:16.443788  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:39:16.475826  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:39:16.476023  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:39:16.476153  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:39:16.476240  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:39:16.476461  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:39:16.476518  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:39:16.503691  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:39:16.503931  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1229 07:39:16.503990  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1229 07:39:16.504023  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:39:16.503775  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1229 07:39:16.504072  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I1229 07:39:16.503804  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1229 07:39:16.504132  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1229 07:39:16.503827  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:39:16.504230  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:39:16.503872  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1229 07:39:16.504307  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I1229 07:39:16.503907  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1229 07:39:16.504381  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I1229 07:39:16.560572  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1229 07:39:16.560606  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I1229 07:39:16.560666  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1229 07:39:16.560675  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1229 07:39:16.615828  234900 containerd.go:286] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1229 07:39:16.615993  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1229 07:39:16.888450  234900 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1229 07:39:16.889178  234900 containerd.go:268] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1229 07:39:16.889519  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:16.933473  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1229 07:39:17.072143  234900 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1229 07:39:17.072242  234900 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:17.072317  234900 ssh_runner.go:195] Run: which crictl
	I1229 07:39:17.136399  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:17.163414  234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:39:17.163486  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:39:17.233522  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:18.693201  234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.529687009s)
	I1229 07:39:18.693232  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1229 07:39:18.693252  234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:39:18.693313  234900 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.459763822s)
	I1229 07:39:18.693419  234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:18.693514  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:39:19.489265  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1229 07:39:19.489295  234900 containerd.go:286] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:39:19.489348  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:39:19.489398  234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:39:19.489495  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:39:20.896744  234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.407367011s)
	I1229 07:39:20.896775  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1229 07:39:20.896794  234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:39:20.896811  234900 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.407270189s)
	I1229 07:39:20.896837  234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1229 07:39:20.896843  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:39:20.896857  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1229 07:39:21.938971  234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0: (1.04210568s)
	I1229 07:39:21.939001  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1229 07:39:21.939019  234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:39:21.939067  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:39:22.997683  234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.058588554s)
	I1229 07:39:22.997715  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1229 07:39:22.997732  234900 containerd.go:286] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:39:22.997784  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:39:24.092872  234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.095058701s)
	I1229 07:39:24.092896  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1229 07:39:24.092916  234900 containerd.go:286] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:39:24.092965  234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:39:24.446064  234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1229 07:39:24.446100  234900 cache_images.go:125] Successfully loaded all cached images
	I1229 07:39:24.446107  234900 cache_images.go:94] duration metric: took 8.773852422s to LoadCachedImages
	I1229 07:39:24.446118  234900 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1229 07:39:24.446252  234900 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-918033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:39:24.446324  234900 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1229 07:39:24.480759  234900 cni.go:84] Creating CNI manager for ""
	I1229 07:39:24.480784  234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:39:24.480802  234900 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:39:24.480824  234900 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-918033 NodeName:no-preload-918033 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:39:24.480945  234900 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-918033"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:39:24.481016  234900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:39:24.490336  234900 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1229 07:39:24.490417  234900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1229 07:39:24.498907  234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I1229 07:39:24.498999  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1229 07:39:24.499075  234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I1229 07:39:24.499107  234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:24.499187  234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I1229 07:39:24.499240  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1229 07:39:24.517100  234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1229 07:39:24.517137  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I1229 07:39:24.517191  234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1229 07:39:24.517207  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I1229 07:39:24.517307  234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1229 07:39:24.529809  234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1229 07:39:24.529851  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
	I1229 07:39:25.317444  234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:39:25.325087  234900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1229 07:39:25.337985  234900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:39:25.350470  234900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
	I1229 07:39:25.363109  234900 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:39:25.366695  234900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:39:25.376228  234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:25.492272  234900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:25.511205  234900 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033 for IP: 192.168.76.2
	I1229 07:39:25.511275  234900 certs.go:195] generating shared ca certs ...
	I1229 07:39:25.511307  234900 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:25.511497  234900 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
	I1229 07:39:25.511597  234900 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
	I1229 07:39:25.511624  234900 certs.go:257] generating profile certs ...
	I1229 07:39:25.511712  234900 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key
	I1229 07:39:25.511753  234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt with IP's: []
	I1229 07:39:26.070566  234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt ...
	I1229 07:39:26.070600  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: {Name:mke6fbc75d6afc614594909fcc9f7b2016fab856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.070810  234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key ...
	I1229 07:39:26.070824  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key: {Name:mk60da7d8e2fe6a897d733ec71cb884a5b71061c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.070914  234900 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e
	I1229 07:39:26.070934  234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:39:26.375643  234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e ...
	I1229 07:39:26.375677  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e: {Name:mk6060f23362292301fa85a386b2c4d4f465605b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.376608  234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e ...
	I1229 07:39:26.376635  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e: {Name:mkbbd56e538ac910fad749c8ee68b38982a96952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.376732  234900 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt
	I1229 07:39:26.376812  234900 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key
	I1229 07:39:26.376903  234900 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key
	I1229 07:39:26.376930  234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt with IP's: []
	I1229 07:39:26.491965  234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt ...
	I1229 07:39:26.491995  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt: {Name:mk3e00cbea9bc6558325a122435172afe410ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.492914  234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key ...
	I1229 07:39:26.492932  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key: {Name:mk60bc43cb56c3422562a551db9f0aa1d70c2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:26.494061  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
	W1229 07:39:26.494111  234900 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
	I1229 07:39:26.494120  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:39:26.494150  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:39:26.494177  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:39:26.494204  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
	I1229 07:39:26.494252  234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:39:26.494846  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:39:26.514334  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:39:26.534059  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:39:26.552382  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:39:26.570780  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:39:26.589132  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:39:26.607092  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:39:26.624722  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:39:26.642689  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:39:26.660337  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
	I1229 07:39:26.679042  234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
	I1229 07:39:26.702325  234900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:39:26.717619  234900 ssh_runner.go:195] Run: openssl version
	I1229 07:39:26.724531  234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:26.732985  234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:39:26.741139  234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:26.745568  234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:26.745671  234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:26.786701  234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:39:26.794177  234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:39:26.801695  234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
	I1229 07:39:26.808911  234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
	I1229 07:39:26.816215  234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
	I1229 07:39:26.820022  234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
	I1229 07:39:26.820116  234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
	I1229 07:39:26.860960  234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:39:26.868767  234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
	I1229 07:39:26.876363  234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
	I1229 07:39:26.884070  234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
	I1229 07:39:26.891933  234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
	I1229 07:39:26.895936  234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
	I1229 07:39:26.896041  234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
	I1229 07:39:26.937908  234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:26.950396  234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:26.960765  234900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:39:26.965186  234900 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:39:26.965277  234900 kubeadm.go:401] StartCluster: {Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:26.965400  234900 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1229 07:39:26.965493  234900 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:26.994033  234900 cri.go:96] found id: ""
	I1229 07:39:26.994155  234900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:39:27.004217  234900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:39:27.014161  234900 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:39:27.014233  234900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:39:27.022738  234900 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:39:27.022762  234900 kubeadm.go:158] found existing configuration files:
	
	I1229 07:39:27.022846  234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:39:27.031228  234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:39:27.031294  234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:39:27.039160  234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:39:27.047307  234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:39:27.047372  234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:39:27.054791  234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:39:27.062575  234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:39:27.062640  234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:39:27.070183  234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:39:27.078378  234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:39:27.078448  234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:39:27.086083  234900 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:39:27.207418  234900 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:39:27.207859  234900 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:39:27.275636  234900 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:39:39.812698  234900 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:39:39.812760  234900 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:39:39.812848  234900 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:39:39.812904  234900 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:39:39.812937  234900 kubeadm.go:319] OS: Linux
	I1229 07:39:39.812984  234900 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:39:39.813064  234900 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:39:39.813134  234900 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:39:39.813199  234900 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:39:39.813249  234900 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:39:39.813306  234900 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:39:39.813355  234900 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:39:39.813414  234900 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:39:39.813471  234900 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:39:39.813552  234900 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:39:39.813647  234900 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:39:39.813739  234900 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:39:39.813804  234900 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:39:39.816934  234900 out.go:252]   - Generating certificates and keys ...
	I1229 07:39:39.817038  234900 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:39:39.817122  234900 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:39:39.817193  234900 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:39:39.817253  234900 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:39:39.817316  234900 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:39:39.817375  234900 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:39:39.817436  234900 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:39:39.817560  234900 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-918033] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:39:39.817615  234900 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:39:39.817736  234900 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-918033] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:39:39.817804  234900 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:39:39.817874  234900 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:39:39.817921  234900 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:39:39.817995  234900 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:39:39.818049  234900 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:39:39.818109  234900 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:39:39.818171  234900 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:39:39.818236  234900 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:39:39.818292  234900 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:39:39.818377  234900 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:39:39.818446  234900 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:39:39.821341  234900 out.go:252]   - Booting up control plane ...
	I1229 07:39:39.821456  234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:39:39.821540  234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:39:39.821610  234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:39:39.821720  234900 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:39:39.821818  234900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:39:39.821956  234900 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:39:39.822056  234900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:39:39.822105  234900 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:39:39.822241  234900 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:39:39.822350  234900 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:39:39.822430  234900 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001367535s
	I1229 07:39:39.822527  234900 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:39:39.822611  234900 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1229 07:39:39.822725  234900 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:39:39.822808  234900 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:39:39.822888  234900 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.013328603s
	I1229 07:39:39.822965  234900 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.859441554s
	I1229 07:39:39.823035  234900 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003418811s
	I1229 07:39:39.823144  234900 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:39:39.823271  234900 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:39:39.823345  234900 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:39:39.823538  234900 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-918033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:39:39.823598  234900 kubeadm.go:319] [bootstrap-token] Using token: 4w53e1.2t0pp28sxdrefpmi
	I1229 07:39:39.826587  234900 out.go:252]   - Configuring RBAC rules ...
	I1229 07:39:39.826714  234900 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:39:39.826805  234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:39:39.826949  234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:39:39.827080  234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:39:39.827205  234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:39:39.827296  234900 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:39:39.827415  234900 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:39:39.827461  234900 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:39:39.827511  234900 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:39:39.827519  234900 kubeadm.go:319] 
	I1229 07:39:39.827583  234900 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:39:39.827591  234900 kubeadm.go:319] 
	I1229 07:39:39.827669  234900 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:39:39.827676  234900 kubeadm.go:319] 
	I1229 07:39:39.827701  234900 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:39:39.827763  234900 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:39:39.827823  234900 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:39:39.827830  234900 kubeadm.go:319] 
	I1229 07:39:39.827901  234900 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:39:39.827908  234900 kubeadm.go:319] 
	I1229 07:39:39.827956  234900 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:39:39.827964  234900 kubeadm.go:319] 
	I1229 07:39:39.828015  234900 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:39:39.828093  234900 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:39:39.828164  234900 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:39:39.828170  234900 kubeadm.go:319] 
	I1229 07:39:39.828254  234900 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:39:39.828334  234900 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:39:39.828341  234900 kubeadm.go:319] 
	I1229 07:39:39.828426  234900 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4w53e1.2t0pp28sxdrefpmi \
	I1229 07:39:39.828532  234900 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d98392a0db18aee16ce0424e6d823438ce761b4275760bd1e31f17fdc46df4c0 \
	I1229 07:39:39.828555  234900 kubeadm.go:319] 	--control-plane 
	I1229 07:39:39.828562  234900 kubeadm.go:319] 
	I1229 07:39:39.828646  234900 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:39:39.828653  234900 kubeadm.go:319] 
	I1229 07:39:39.828735  234900 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4w53e1.2t0pp28sxdrefpmi \
	I1229 07:39:39.828854  234900 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d98392a0db18aee16ce0424e6d823438ce761b4275760bd1e31f17fdc46df4c0 
	I1229 07:39:39.828869  234900 cni.go:84] Creating CNI manager for ""
	I1229 07:39:39.828924  234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:39:39.834032  234900 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:39:39.836991  234900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:39:39.841281  234900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:39:39.841305  234900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:39:39.855055  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:39:40.160605  234900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:39:40.160670  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:40.160736  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-918033 minikube.k8s.io/updated_at=2025_12_29T07_39_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=no-preload-918033 minikube.k8s.io/primary=true
	I1229 07:39:40.178903  234900 ops.go:34] apiserver oom_adj: -16
	I1229 07:39:40.381739  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:40.882389  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:41.382278  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:41.881856  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:42.381885  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:42.882278  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:43.382857  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:43.882715  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:44.382653  234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:39:44.466623  234900 kubeadm.go:1114] duration metric: took 4.305997311s to wait for elevateKubeSystemPrivileges
	I1229 07:39:44.466661  234900 kubeadm.go:403] duration metric: took 17.501387599s to StartCluster
	I1229 07:39:44.466678  234900 settings.go:142] acquiring lock: {Name:mkbb5f02ec6801af9f7806fd554ca9cee95eb430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:44.466760  234900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:39:44.467357  234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/kubeconfig: {Name:mk79bef4549b8f63fb70afbc722117a9e75f76e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:44.467603  234900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:39:44.467620  234900 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1229 07:39:44.467863  234900 config.go:182] Loaded profile config "no-preload-918033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:39:44.467902  234900 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:39:44.467964  234900 addons.go:70] Setting storage-provisioner=true in profile "no-preload-918033"
	I1229 07:39:44.467996  234900 addons.go:239] Setting addon storage-provisioner=true in "no-preload-918033"
	I1229 07:39:44.468018  234900 addons.go:70] Setting default-storageclass=true in profile "no-preload-918033"
	I1229 07:39:44.468049  234900 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-918033"
	I1229 07:39:44.468020  234900 host.go:66] Checking if "no-preload-918033" exists ...
	I1229 07:39:44.468391  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:44.468586  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:44.473507  234900 out.go:179] * Verifying Kubernetes components...
	I1229 07:39:44.476423  234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:44.498822  234900 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:39:44.502945  234900 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:39:44.502968  234900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:39:44.503033  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:44.512379  234900 addons.go:239] Setting addon default-storageclass=true in "no-preload-918033"
	I1229 07:39:44.512416  234900 host.go:66] Checking if "no-preload-918033" exists ...
	I1229 07:39:44.512832  234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
	I1229 07:39:44.539705  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:44.562188  234900 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:39:44.562217  234900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:39:44.562278  234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
	I1229 07:39:44.586778  234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
	I1229 07:39:44.695376  234900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:39:44.842331  234900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:45.067857  234900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:39:45.120022  234900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:39:45.501596  234900 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1229 07:39:45.504035  234900 node_ready.go:35] waiting up to 6m0s for node "no-preload-918033" to be "Ready" ...
	I1229 07:39:45.978046  234900 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:39:45.982596  234900 addons.go:530] duration metric: took 1.514686547s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:39:46.008402  234900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-918033" context rescaled to 1 replicas
	W1229 07:39:47.508703  234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
	W1229 07:39:50.012135  234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
	W1229 07:39:52.507472  234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
	W1229 07:39:54.507789  234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
	W1229 07:39:57.009113  234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
	I1229 07:39:58.009600  234900 node_ready.go:49] node "no-preload-918033" is "Ready"
	I1229 07:39:58.009635  234900 node_ready.go:38] duration metric: took 12.505569558s for node "no-preload-918033" to be "Ready" ...
	I1229 07:39:58.009649  234900 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:39:58.009707  234900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:39:58.028409  234900 api_server.go:72] duration metric: took 13.560758885s to wait for apiserver process to appear ...
	I1229 07:39:58.028439  234900 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:39:58.028466  234900 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:58.037326  234900 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1229 07:39:58.038412  234900 api_server.go:141] control plane version: v1.35.0
	I1229 07:39:58.038444  234900 api_server.go:131] duration metric: took 9.99739ms to wait for apiserver health ...
	I1229 07:39:58.038454  234900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:39:58.042215  234900 system_pods.go:59] 8 kube-system pods found
	I1229 07:39:58.042255  234900 system_pods.go:61] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:58.042262  234900 system_pods.go:61] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
	I1229 07:39:58.042268  234900 system_pods.go:61] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
	I1229 07:39:58.042273  234900 system_pods.go:61] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
	I1229 07:39:58.042279  234900 system_pods.go:61] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
	I1229 07:39:58.042283  234900 system_pods.go:61] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
	I1229 07:39:58.042288  234900 system_pods.go:61] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
	I1229 07:39:58.042293  234900 system_pods.go:61] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:39:58.042299  234900 system_pods.go:74] duration metric: took 3.839508ms to wait for pod list to return data ...
	I1229 07:39:58.042312  234900 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:39:58.046139  234900 default_sa.go:45] found service account: "default"
	I1229 07:39:58.046168  234900 default_sa.go:55] duration metric: took 3.847622ms for default service account to be created ...
	I1229 07:39:58.046180  234900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:39:58.053376  234900 system_pods.go:86] 8 kube-system pods found
	I1229 07:39:58.053410  234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:58.053418  234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
	I1229 07:39:58.053443  234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
	I1229 07:39:58.053454  234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
	I1229 07:39:58.053460  234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
	I1229 07:39:58.053465  234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
	I1229 07:39:58.053476  234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
	I1229 07:39:58.053483  234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:39:58.053524  234900 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:39:58.309277  234900 system_pods.go:86] 8 kube-system pods found
	I1229 07:39:58.309314  234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:58.309331  234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
	I1229 07:39:58.309337  234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
	I1229 07:39:58.309343  234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
	I1229 07:39:58.309348  234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
	I1229 07:39:58.309353  234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
	I1229 07:39:58.309361  234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
	I1229 07:39:58.309368  234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:39:58.642095  234900 system_pods.go:86] 8 kube-system pods found
	I1229 07:39:58.642128  234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Running
	I1229 07:39:58.642136  234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
	I1229 07:39:58.642141  234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
	I1229 07:39:58.642145  234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
	I1229 07:39:58.642151  234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
	I1229 07:39:58.642155  234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
	I1229 07:39:58.642163  234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
	I1229 07:39:58.642167  234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Running
	I1229 07:39:58.642174  234900 system_pods.go:126] duration metric: took 595.98942ms to wait for k8s-apps to be running ...
	I1229 07:39:58.642187  234900 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:39:58.642245  234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:58.657180  234900 system_svc.go:56] duration metric: took 14.983182ms WaitForService to wait for kubelet
	I1229 07:39:58.657208  234900 kubeadm.go:587] duration metric: took 14.189562627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:39:58.657227  234900 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:39:58.659993  234900 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:39:58.660023  234900 node_conditions.go:123] node cpu capacity is 2
	I1229 07:39:58.660037  234900 node_conditions.go:105] duration metric: took 2.805363ms to run NodePressure ...
	I1229 07:39:58.660051  234900 start.go:242] waiting for startup goroutines ...
	I1229 07:39:58.660058  234900 start.go:247] waiting for cluster config update ...
	I1229 07:39:58.660069  234900 start.go:256] writing updated cluster config ...
	I1229 07:39:58.660370  234900 ssh_runner.go:195] Run: rm -f paused
	I1229 07:39:58.664201  234900 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:39:58.668007  234900 pod_ready.go:83] waiting for pod "coredns-7d764666f9-4s98b" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.673140  234900 pod_ready.go:94] pod "coredns-7d764666f9-4s98b" is "Ready"
	I1229 07:39:58.673208  234900 pod_ready.go:86] duration metric: took 5.17018ms for pod "coredns-7d764666f9-4s98b" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.675801  234900 pod_ready.go:83] waiting for pod "etcd-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.680341  234900 pod_ready.go:94] pod "etcd-no-preload-918033" is "Ready"
	I1229 07:39:58.680365  234900 pod_ready.go:86] duration metric: took 4.538305ms for pod "etcd-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.682928  234900 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.687615  234900 pod_ready.go:94] pod "kube-apiserver-no-preload-918033" is "Ready"
	I1229 07:39:58.687641  234900 pod_ready.go:86] duration metric: took 4.686975ms for pod "kube-apiserver-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:58.690252  234900 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:59.068169  234900 pod_ready.go:94] pod "kube-controller-manager-no-preload-918033" is "Ready"
	I1229 07:39:59.068195  234900 pod_ready.go:86] duration metric: took 377.913348ms for pod "kube-controller-manager-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:59.268858  234900 pod_ready.go:83] waiting for pod "kube-proxy-jc85q" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:59.667690  234900 pod_ready.go:94] pod "kube-proxy-jc85q" is "Ready"
	I1229 07:39:59.667716  234900 pod_ready.go:86] duration metric: took 398.82863ms for pod "kube-proxy-jc85q" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:59.868234  234900 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:40:00.274981  234900 pod_ready.go:94] pod "kube-scheduler-no-preload-918033" is "Ready"
	I1229 07:40:00.275067  234900 pod_ready.go:86] duration metric: took 406.802719ms for pod "kube-scheduler-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:40:00.275099  234900 pod_ready.go:40] duration metric: took 1.610863516s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:40:00.559168  234900 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:40:00.588169  234900 out.go:203] 
	W1229 07:40:00.591232  234900 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:40:00.596969  234900 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:40:00.602529  234900 out.go:179] * Done! kubectl is now configured to use "no-preload-918033" cluster and "default" namespace by default
	I1229 07:40:15.207793  210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001145684s
	I1229 07:40:15.207822  210456 kubeadm.go:319] 
	I1229 07:40:15.207881  210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:40:15.207921  210456 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:40:15.208335  210456 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:40:15.208367  210456 kubeadm.go:319] 
	I1229 07:40:15.208562  210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:40:15.208767  210456 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:40:15.208824  210456 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:40:15.208831  210456 kubeadm.go:319] 
	I1229 07:40:15.214036  210456 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:40:15.214541  210456 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:40:15.214683  210456 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:40:15.215072  210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:40:15.215094  210456 kubeadm.go:319] 
	I1229 07:40:15.215202  210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:40:15.215232  210456 kubeadm.go:403] duration metric: took 8m6.753852906s to StartCluster
	I1229 07:40:15.215267  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:40:15.215335  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:40:15.242364  210456 cri.go:96] found id: ""
	I1229 07:40:15.242397  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.242407  210456 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:40:15.242414  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1229 07:40:15.242481  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:40:15.268537  210456 cri.go:96] found id: ""
	I1229 07:40:15.268562  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.268570  210456 logs.go:284] No container was found matching "etcd"
	I1229 07:40:15.268577  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1229 07:40:15.268637  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:40:15.296387  210456 cri.go:96] found id: ""
	I1229 07:40:15.296427  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.296436  210456 logs.go:284] No container was found matching "coredns"
	I1229 07:40:15.296443  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:40:15.296513  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:40:15.322743  210456 cri.go:96] found id: ""
	I1229 07:40:15.322771  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.322784  210456 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:40:15.322792  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:40:15.322868  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:40:15.351562  210456 cri.go:96] found id: ""
	I1229 07:40:15.351598  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.351607  210456 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:40:15.351619  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:40:15.351682  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:40:15.376895  210456 cri.go:96] found id: ""
	I1229 07:40:15.376919  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.376928  210456 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:40:15.376935  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1229 07:40:15.376995  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:40:15.404023  210456 cri.go:96] found id: ""
	I1229 07:40:15.404049  210456 logs.go:282] 0 containers: []
	W1229 07:40:15.404058  210456 logs.go:284] No container was found matching "kindnet"
	I1229 07:40:15.404069  210456 logs.go:123] Gathering logs for dmesg ...
	I1229 07:40:15.404082  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:40:15.418184  210456 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:40:15.418215  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:40:15.484850  210456 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:40:15.476614    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.477057    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478609    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478976    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.480500    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:40:15.476614    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.477057    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478609    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.478976    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:15.480500    4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:40:15.484926  210456 logs.go:123] Gathering logs for containerd ...
	I1229 07:40:15.484952  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1229 07:40:15.525775  210456 logs.go:123] Gathering logs for container status ...
	I1229 07:40:15.525809  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:40:15.555955  210456 logs.go:123] Gathering logs for kubelet ...
	I1229 07:40:15.556034  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1229 07:40:15.612571  210456 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:40:15.612644  210456 out.go:285] * 
	W1229 07:40:15.612696  210456 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:40:15.612713  210456 out.go:285] * 
	W1229 07:40:15.612962  210456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:40:15.617816  210456 out.go:203] 
	W1229 07:40:15.621782  210456 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001145684s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:40:15.621867  210456 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:40:15.621888  210456 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:40:15.625797  210456 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915470796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915537947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915648455Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915719759Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915789076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915854571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915911572Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915973998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916040657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916129323Z" level=info msg="Connect containerd service"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916546756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.917238801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927329274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927393693Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927420491Z" level=info msg="Start subscribing containerd event"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927477230Z" level=info msg="Start recovering state"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968265850Z" level=info msg="Start event monitor"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968458919Z" level=info msg="Start cni network conf syncer for default"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968526727Z" level=info msg="Start streaming server"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968593748Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968654524Z" level=info msg="runtime interface starting up..."
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968711657Z" level=info msg="starting plugins..."
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968776224Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:32:06 force-systemd-flag-275936 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.971026262Z" level=info msg="containerd successfully booted in 0.082352s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:40:17.017692    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:17.018828    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:17.019906    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:17.020639    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:40:17.021717    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec29 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014780] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.558389] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034938] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.769839] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.300699] kauditd_printk_skb: 39 callbacks suppressed
	[Dec29 07:00] hrtimer: interrupt took 19167915 ns
	
	
	==> kernel <==
	 07:40:17 up  1:22,  0 user,  load average: 1.63, 1.68, 2.08
	Linux force-systemd-flag-275936 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:40:13 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:14 force-systemd-flag-275936 kubelet[4733]: E1229 07:40:14.224114    4733 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:14 force-systemd-flag-275936 kubelet[4739]: E1229 07:40:14.980376    4739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:15 force-systemd-flag-275936 kubelet[4823]: E1229 07:40:15.744042    4823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:40:16 force-systemd-flag-275936 kubelet[4852]: E1229 07:40:16.510038    4852 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
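Sketch (not part of this CI run): every kubelet.service restart in the journal above dies on "kubelet is configured to not run on a host using cgroup v1", so the crash loop points at the host cgroup hierarchy rather than at containerd or the systemd cgroup driver. A quick way to confirm which hierarchy the build host, or the node container this profile created, is actually on (container name taken from the log; commands assumed, not executed here):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means the legacy cgroup v1 layout
	stat -fc %T /sys/fs/cgroup/
	# same check inside the profile's node container
	docker exec force-systemd-flag-275936 stat -fc %T /sys/fs/cgroup/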
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275936 -n force-systemd-flag-275936
E1229 07:40:17.495793    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.501866    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.513327    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.534293    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275936 -n force-systemd-flag-275936: exit status 6 (367.29621ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:40:17.520862  239811 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275936" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-275936" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-275936" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-275936
E1229 07:40:17.575401    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.655742    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.816165    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:18.136730    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:18.777701    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-275936: (1.993765652s)
--- FAIL: TestForceSystemdFlag (504.58s)
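Sketch of a retry outside this CI job: the --extra-config flag below is quoted from minikube's own Suggestion line above, and FailCgroupV1 is the kubelet configuration option that the SystemVerification warning names for kubelet v1.35+ on cgroup v1 hosts, so treat the exact spelling as something to verify rather than a confirmed fix.

	# not executed as part of this report
	out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# per the preflight warning, a cgroup v1 host additionally needs the kubelet
	# configuration option FailCgroupV1 set to false (and that validation explicitly
	# skipped) before kubelet v1.35 will start at all.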

                                                
                                    
TestForceSystemdEnv (508.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-765623 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-765623 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m24.229839815s)

                                                
                                                
-- stdout --
	* [force-systemd-env-765623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-765623" primary control-plane node in "force-systemd-env-765623" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:25:17.084149  189119 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:25:17.084286  189119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:25:17.084292  189119 out.go:374] Setting ErrFile to fd 2...
	I1229 07:25:17.084297  189119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:25:17.084695  189119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:25:17.085293  189119 out.go:368] Setting JSON to false
	I1229 07:25:17.086193  189119 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4068,"bootTime":1766989049,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 07:25:17.086290  189119 start.go:143] virtualization:  
	I1229 07:25:17.093878  189119 out.go:179] * [force-systemd-env-765623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:25:17.097512  189119 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:25:17.097714  189119 notify.go:221] Checking for updates...
	I1229 07:25:17.104691  189119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:25:17.108089  189119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:25:17.111472  189119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 07:25:17.114659  189119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:25:17.117671  189119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1229 07:25:17.121386  189119 config.go:182] Loaded profile config "test-preload-458991": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:25:17.121499  189119 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:25:17.165604  189119 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:25:17.165708  189119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:25:17.272989  189119 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 07:25:17.260679281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:25:17.273241  189119 docker.go:319] overlay module found
	I1229 07:25:17.276937  189119 out.go:179] * Using the docker driver based on user configuration
	I1229 07:25:17.279959  189119 start.go:309] selected driver: docker
	I1229 07:25:17.279980  189119 start.go:928] validating driver "docker" against <nil>
	I1229 07:25:17.279993  189119 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:25:17.280696  189119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:25:17.358334  189119 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 07:25:17.347709463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:25:17.358484  189119 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:25:17.358698  189119 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:25:17.362330  189119 out.go:179] * Using Docker driver with root privileges
	I1229 07:25:17.365440  189119 cni.go:84] Creating CNI manager for ""
	I1229 07:25:17.365507  189119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:25:17.365517  189119 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:25:17.365600  189119 start.go:353] cluster config:
	{Name:force-systemd-env-765623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-765623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:25:17.368816  189119 out.go:179] * Starting "force-systemd-env-765623" primary control-plane node in "force-systemd-env-765623" cluster
	I1229 07:25:17.371884  189119 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1229 07:25:17.374880  189119 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:25:17.377975  189119 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:25:17.378018  189119 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1229 07:25:17.378029  189119 cache.go:65] Caching tarball of preloaded images
	I1229 07:25:17.378123  189119 preload.go:251] Found /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:25:17.378132  189119 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1229 07:25:17.378239  189119 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/config.json ...
	I1229 07:25:17.378257  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/config.json: {Name:mk748b3bca0d4b0f9fcd30139620e8a6eac95ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:17.378400  189119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:25:17.402203  189119 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:25:17.402231  189119 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:25:17.402248  189119 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:25:17.402282  189119 start.go:360] acquireMachinesLock for force-systemd-env-765623: {Name:mk7c2a9e19b4870cda1e69d03bdb7c9b7f653c8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:25:17.402376  189119 start.go:364] duration metric: took 79.615µs to acquireMachinesLock for "force-systemd-env-765623"
	I1229 07:25:17.402401  189119 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-765623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-765623 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1229 07:25:17.402463  189119 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:25:17.406187  189119 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:25:17.406441  189119 start.go:159] libmachine.API.Create for "force-systemd-env-765623" (driver="docker")
	I1229 07:25:17.406472  189119 client.go:173] LocalClient.Create starting
	I1229 07:25:17.406527  189119 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
	I1229 07:25:17.406558  189119 main.go:144] libmachine: Decoding PEM data...
	I1229 07:25:17.406573  189119 main.go:144] libmachine: Parsing certificate...
	I1229 07:25:17.406621  189119 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
	I1229 07:25:17.406638  189119 main.go:144] libmachine: Decoding PEM data...
	I1229 07:25:17.406654  189119 main.go:144] libmachine: Parsing certificate...
	I1229 07:25:17.407009  189119 cli_runner.go:164] Run: docker network inspect force-systemd-env-765623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:25:17.427730  189119 cli_runner.go:211] docker network inspect force-systemd-env-765623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:25:17.427818  189119 network_create.go:284] running [docker network inspect force-systemd-env-765623] to gather additional debugging logs...
	I1229 07:25:17.427841  189119 cli_runner.go:164] Run: docker network inspect force-systemd-env-765623
	W1229 07:25:17.445975  189119 cli_runner.go:211] docker network inspect force-systemd-env-765623 returned with exit code 1
	I1229 07:25:17.446007  189119 network_create.go:287] error running [docker network inspect force-systemd-env-765623]: docker network inspect force-systemd-env-765623: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-765623 not found
	I1229 07:25:17.446033  189119 network_create.go:289] output of [docker network inspect force-systemd-env-765623]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-765623 not found
	
	** /stderr **
	I1229 07:25:17.446140  189119 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:25:17.470583  189119 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
	I1229 07:25:17.470948  189119 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
	I1229 07:25:17.471295  189119 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
	I1229 07:25:17.471735  189119 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fdaa0}
	I1229 07:25:17.471756  189119 network_create.go:124] attempt to create docker network force-systemd-env-765623 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:25:17.471810  189119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-765623 force-systemd-env-765623
	I1229 07:25:17.549536  189119 network_create.go:108] docker network force-systemd-env-765623 192.168.76.0/24 created
	I1229 07:25:17.549565  189119 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-765623" container
	I1229 07:25:17.549645  189119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:25:17.570727  189119 cli_runner.go:164] Run: docker volume create force-systemd-env-765623 --label name.minikube.sigs.k8s.io=force-systemd-env-765623 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:25:17.595614  189119 oci.go:103] Successfully created a docker volume force-systemd-env-765623
	I1229 07:25:17.595709  189119 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-765623-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-765623 --entrypoint /usr/bin/test -v force-systemd-env-765623:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:25:18.196170  189119 oci.go:107] Successfully prepared a docker volume force-systemd-env-765623
	I1229 07:25:18.196225  189119 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:25:18.196236  189119 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:25:18.196304  189119 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-765623:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:25:23.616829  189119 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-765623:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (5.420478563s)
	I1229 07:25:23.616861  189119 kic.go:203] duration metric: took 5.420621308s to extract preloaded images to volume ...
	W1229 07:25:23.616986  189119 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:25:23.617110  189119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:25:23.691009  189119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-765623 --name force-systemd-env-765623 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-765623 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-765623 --network force-systemd-env-765623 --ip 192.168.76.2 --volume force-systemd-env-765623:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:25:24.058389  189119 cli_runner.go:164] Run: docker container inspect force-systemd-env-765623 --format={{.State.Running}}
	I1229 07:25:24.080065  189119 cli_runner.go:164] Run: docker container inspect force-systemd-env-765623 --format={{.State.Status}}
	I1229 07:25:24.112462  189119 cli_runner.go:164] Run: docker exec force-systemd-env-765623 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:25:24.180614  189119 oci.go:144] the created container "force-systemd-env-765623" has a running status.
	I1229 07:25:24.180640  189119 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa...
	I1229 07:25:24.959359  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:25:24.959411  189119 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:25:24.986657  189119 cli_runner.go:164] Run: docker container inspect force-systemd-env-765623 --format={{.State.Status}}
	I1229 07:25:25.030378  189119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:25:25.030399  189119 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-765623 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:25:25.092482  189119 cli_runner.go:164] Run: docker container inspect force-systemd-env-765623 --format={{.State.Status}}
	I1229 07:25:25.117383  189119 machine.go:94] provisionDockerMachine start ...
	I1229 07:25:25.117496  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:25.144624  189119 main.go:144] libmachine: Using SSH client type: native
	I1229 07:25:25.144981  189119 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1229 07:25:25.144991  189119 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:25:25.145939  189119 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50524->127.0.0.1:33013: read: connection reset by peer
	I1229 07:25:28.309205  189119 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-765623
	
	I1229 07:25:28.309234  189119 ubuntu.go:182] provisioning hostname "force-systemd-env-765623"
	I1229 07:25:28.309300  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:28.332683  189119 main.go:144] libmachine: Using SSH client type: native
	I1229 07:25:28.332998  189119 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1229 07:25:28.333016  189119 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-765623 && echo "force-systemd-env-765623" | sudo tee /etc/hostname
	I1229 07:25:28.508778  189119 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-765623
	
	I1229 07:25:28.508863  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:28.531602  189119 main.go:144] libmachine: Using SSH client type: native
	I1229 07:25:28.531917  189119 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1229 07:25:28.531939  189119 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-765623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-765623/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-765623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:25:28.689698  189119 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:25:28.689729  189119 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
	I1229 07:25:28.689771  189119 ubuntu.go:190] setting up certificates
	I1229 07:25:28.689783  189119 provision.go:84] configureAuth start
	I1229 07:25:28.689902  189119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-765623
	I1229 07:25:28.715458  189119 provision.go:143] copyHostCerts
	I1229 07:25:28.715502  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:25:28.715535  189119 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
	I1229 07:25:28.715546  189119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:25:28.715622  189119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
	I1229 07:25:28.715702  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:25:28.715724  189119 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
	I1229 07:25:28.715732  189119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:25:28.715766  189119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
	I1229 07:25:28.715809  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:25:28.715830  189119 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
	I1229 07:25:28.715840  189119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:25:28.715864  189119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
	I1229 07:25:28.715916  189119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-765623 san=[127.0.0.1 192.168.76.2 force-systemd-env-765623 localhost minikube]
	I1229 07:25:28.919053  189119 provision.go:177] copyRemoteCerts
	I1229 07:25:28.919131  189119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:25:28.919177  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:28.939761  189119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa Username:docker}
	I1229 07:25:29.052146  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:25:29.052229  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:25:29.080447  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:25:29.080520  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:25:29.108831  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:25:29.108934  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:25:29.136722  189119 provision.go:87] duration metric: took 446.904755ms to configureAuth
	I1229 07:25:29.136755  189119 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:25:29.136973  189119 config.go:182] Loaded profile config "force-systemd-env-765623": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:25:29.136988  189119 machine.go:97] duration metric: took 4.019588332s to provisionDockerMachine
	I1229 07:25:29.136996  189119 client.go:176] duration metric: took 11.730518286s to LocalClient.Create
	I1229 07:25:29.137010  189119 start.go:167] duration metric: took 11.730570677s to libmachine.API.Create "force-systemd-env-765623"
	I1229 07:25:29.137020  189119 start.go:293] postStartSetup for "force-systemd-env-765623" (driver="docker")
	I1229 07:25:29.137124  189119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:25:29.137212  189119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:25:29.137272  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:29.162197  189119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa Username:docker}
	I1229 07:25:29.274330  189119 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:25:29.278251  189119 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:25:29.278276  189119 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:25:29.278287  189119 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
	I1229 07:25:29.278342  189119 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
	I1229 07:25:29.278418  189119 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
	I1229 07:25:29.278425  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /etc/ssl/certs/43522.pem
	I1229 07:25:29.278524  189119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:25:29.288720  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:25:29.308406  189119 start.go:296] duration metric: took 171.28412ms for postStartSetup
	I1229 07:25:29.308782  189119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-765623
	I1229 07:25:29.329311  189119 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/config.json ...
	I1229 07:25:29.329585  189119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:25:29.329649  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:29.350997  189119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa Username:docker}
	I1229 07:25:29.458636  189119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:25:29.466346  189119 start.go:128] duration metric: took 12.063868547s to createHost
	I1229 07:25:29.466375  189119 start.go:83] releasing machines lock for "force-systemd-env-765623", held for 12.06399014s
	I1229 07:25:29.466447  189119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-765623
	I1229 07:25:29.486659  189119 ssh_runner.go:195] Run: cat /version.json
	I1229 07:25:29.486674  189119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:25:29.486713  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:29.486725  189119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-765623
	I1229 07:25:29.522513  189119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa Username:docker}
	I1229 07:25:29.528253  189119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-env-765623/id_rsa Username:docker}
	I1229 07:25:29.737563  189119 ssh_runner.go:195] Run: systemctl --version
	I1229 07:25:29.744563  189119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:25:29.751024  189119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:25:29.751112  189119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:25:29.784219  189119 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:25:29.784243  189119 start.go:496] detecting cgroup driver to use...
	I1229 07:25:29.784260  189119 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:25:29.784318  189119 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:25:29.804071  189119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:25:29.823666  189119 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:25:29.823739  189119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:25:29.841603  189119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:25:29.861420  189119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:25:30.015612  189119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:25:30.202681  189119 docker.go:234] disabling docker service ...
	I1229 07:25:30.202773  189119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:25:30.226812  189119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:25:30.242749  189119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:25:30.402999  189119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:25:30.540251  189119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:25:30.553084  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:25:30.569841  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:25:30.579094  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:25:30.588530  189119 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:25:30.588688  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:25:30.598379  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:25:30.607891  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:25:30.617450  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:25:30.626801  189119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:25:30.635718  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:25:30.645697  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:25:30.655429  189119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:25:30.665616  189119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:25:30.674656  189119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:25:30.683309  189119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:25:30.829836  189119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:25:31.094690  189119 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1229 07:25:31.094768  189119 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1229 07:25:31.114538  189119 start.go:574] Will wait 60s for crictl version
	I1229 07:25:31.114653  189119 ssh_runner.go:195] Run: which crictl
	I1229 07:25:31.131264  189119 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:25:31.242502  189119 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1229 07:25:31.242601  189119 ssh_runner.go:195] Run: containerd --version
	I1229 07:25:31.274295  189119 ssh_runner.go:195] Run: containerd --version
	I1229 07:25:31.310441  189119 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1229 07:25:31.315336  189119 cli_runner.go:164] Run: docker network inspect force-systemd-env-765623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:25:31.349225  189119 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:25:31.354405  189119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:25:31.366594  189119 kubeadm.go:884] updating cluster {Name:force-systemd-env-765623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-765623 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:25:31.366710  189119 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:25:31.366781  189119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:25:31.400256  189119 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:25:31.400280  189119 containerd.go:542] Images already preloaded, skipping extraction
	I1229 07:25:31.400339  189119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:25:31.434293  189119 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:25:31.434313  189119 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:25:31.434322  189119 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1229 07:25:31.434408  189119 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-765623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-765623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:25:31.434470  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1229 07:25:31.470391  189119 cni.go:84] Creating CNI manager for ""
	I1229 07:25:31.470410  189119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:25:31.470428  189119 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:25:31.470451  189119 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-765623 NodeName:force-systemd-env-765623 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:25:31.470569  189119 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-765623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:25:31.470636  189119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:25:31.479365  189119 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:25:31.479437  189119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:25:31.488706  189119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1229 07:25:31.504742  189119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:25:31.526113  189119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1229 07:25:31.542760  189119 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:25:31.551069  189119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:25:31.562626  189119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:25:31.816132  189119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:25:31.843189  189119 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623 for IP: 192.168.76.2
	I1229 07:25:31.843209  189119 certs.go:195] generating shared ca certs ...
	I1229 07:25:31.843226  189119 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:31.843360  189119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
	I1229 07:25:31.843406  189119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
	I1229 07:25:31.843413  189119 certs.go:257] generating profile certs ...
	I1229 07:25:31.843468  189119 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.key
	I1229 07:25:31.843479  189119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.crt with IP's: []
	I1229 07:25:32.668127  189119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.crt ...
	I1229 07:25:32.668156  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.crt: {Name:mkcd76331b5380938804e6e004014447ea108d11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:32.668397  189119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.key ...
	I1229 07:25:32.668409  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/client.key: {Name:mk1efe20d034db3d9706fde7a3e8f3af54084b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:32.668512  189119 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key.4fe07902
	I1229 07:25:32.668527  189119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt.4fe07902 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:25:33.323253  189119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt.4fe07902 ...
	I1229 07:25:33.323302  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt.4fe07902: {Name:mk8a6010e8aa0c9ef4e3918c0e816273334962c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:33.323484  189119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key.4fe07902 ...
	I1229 07:25:33.323503  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key.4fe07902: {Name:mkcacb859a2c5f6532d2830fe40cc55415f8c25f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:33.323575  189119 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt.4fe07902 -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt
	I1229 07:25:33.323658  189119 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key.4fe07902 -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key
	I1229 07:25:33.323720  189119 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.key
	I1229 07:25:33.323740  189119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.crt with IP's: []
	I1229 07:25:33.438439  189119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.crt ...
	I1229 07:25:33.438478  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.crt: {Name:mk75f4af392dd2dd56387a06d41b85ec5b377407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:33.438720  189119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.key ...
	I1229 07:25:33.438737  189119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.key: {Name:mk0df881c7b4568af9de28da2852803c719d8822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:25:33.438840  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:25:33.438866  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:25:33.438883  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:25:33.438902  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:25:33.438917  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:25:33.438929  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:25:33.438945  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:25:33.438956  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:25:33.439011  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
	W1229 07:25:33.439093  189119 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
	I1229 07:25:33.439106  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:25:33.439133  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:25:33.439162  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:25:33.439190  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
	I1229 07:25:33.439238  189119 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:25:33.439273  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:25:33.439289  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem -> /usr/share/ca-certificates/4352.pem
	I1229 07:25:33.439300  189119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /usr/share/ca-certificates/43522.pem
	I1229 07:25:33.439819  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:25:33.465419  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:25:33.486110  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:25:33.504018  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:25:33.522053  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:25:33.554345  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:25:33.571329  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:25:33.593825  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-env-765623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:25:33.613320  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:25:33.634375  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
	I1229 07:25:33.655733  189119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
	I1229 07:25:33.684185  189119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:25:33.699334  189119 ssh_runner.go:195] Run: openssl version
	I1229 07:25:33.712538  189119 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
	I1229 07:25:33.727615  189119 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
	I1229 07:25:33.736929  189119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
	I1229 07:25:33.745652  189119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
	I1229 07:25:33.745720  189119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
	I1229 07:25:33.797548  189119 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:25:33.805183  189119 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:25:33.812488  189119 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:25:33.820197  189119 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:25:33.827894  189119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:25:33.832094  189119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:25:33.832165  189119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:25:33.874085  189119 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:25:33.881777  189119 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:25:33.889137  189119 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
	I1229 07:25:33.896464  189119 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
	I1229 07:25:33.903743  189119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
	I1229 07:25:33.907762  189119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
	I1229 07:25:33.907827  189119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
	I1229 07:25:33.949151  189119 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:25:33.956518  189119 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
	I1229 07:25:33.963771  189119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:25:33.968197  189119 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:25:33.968246  189119 kubeadm.go:401] StartCluster: {Name:force-systemd-env-765623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-765623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:25:33.968318  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1229 07:25:33.968373  189119 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:25:34.006085  189119 cri.go:96] found id: ""
	I1229 07:25:34.006184  189119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:25:34.016129  189119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:25:34.024392  189119 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:25:34.024460  189119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:25:34.034978  189119 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:25:34.034996  189119 kubeadm.go:158] found existing configuration files:
	
	I1229 07:25:34.035080  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:25:34.043538  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:25:34.043629  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:25:34.051231  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:25:34.059954  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:25:34.060056  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:25:34.067767  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:25:34.076070  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:25:34.076179  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:25:34.084129  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:25:34.093259  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:25:34.093354  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:25:34.101420  189119 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:25:34.152916  189119 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:25:34.153500  189119 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:25:34.332927  189119 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:25:34.332999  189119 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:25:34.333088  189119 kubeadm.go:319] OS: Linux
	I1229 07:25:34.333155  189119 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:25:34.333215  189119 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:25:34.333266  189119 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:25:34.333316  189119 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:25:34.333369  189119 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:25:34.333425  189119 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:25:34.333474  189119 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:25:34.333525  189119 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:25:34.333584  189119 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:25:34.414903  189119 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:25:34.415019  189119 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:25:34.415115  189119 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:25:34.421465  189119 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:25:34.428217  189119 out.go:252]   - Generating certificates and keys ...
	I1229 07:25:34.428315  189119 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:25:34.428381  189119 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:25:34.681304  189119 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:25:34.809406  189119 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:25:34.971746  189119 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:25:35.122292  189119 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:25:35.319897  189119 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:25:35.320271  189119 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:25:35.747156  189119 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:25:35.748196  189119 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:25:36.502123  189119 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:25:36.665431  189119 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:25:36.725408  189119 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:25:36.725495  189119 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:25:37.378591  189119 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:25:38.035342  189119 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:25:38.079853  189119 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:25:38.328204  189119 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:25:38.446189  189119 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:25:38.447508  189119 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:25:38.454870  189119 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:25:38.458726  189119 out.go:252]   - Booting up control plane ...
	I1229 07:25:38.458836  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:25:38.459201  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:25:38.460238  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:25:38.484102  189119 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:25:38.484219  189119 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:25:38.492397  189119 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:25:38.493201  189119 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:25:38.493618  189119 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:25:38.664254  189119 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:25:38.664375  189119 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:29:38.664661  189119 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000567935s
	I1229 07:29:38.664687  189119 kubeadm.go:319] 
	I1229 07:29:38.664744  189119 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:29:38.664777  189119 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:29:38.664882  189119 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:29:38.664887  189119 kubeadm.go:319] 
	I1229 07:29:38.664991  189119 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:29:38.665041  189119 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:29:38.665074  189119 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:29:38.665078  189119 kubeadm.go:319] 
	I1229 07:29:38.670250  189119 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:29:38.670768  189119 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:29:38.670921  189119 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:29:38.671274  189119 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:29:38.671311  189119 kubeadm.go:319] 
	I1229 07:29:38.671462  189119 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:29:38.671518  189119 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000567935s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-765623 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000567935s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:29:38.671628  189119 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1229 07:29:39.082796  189119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:29:39.096479  189119 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:29:39.096543  189119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:29:39.104646  189119 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:29:39.104707  189119 kubeadm.go:158] found existing configuration files:
	
	I1229 07:29:39.104766  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:29:39.112725  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:29:39.112791  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:29:39.120632  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:29:39.128558  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:29:39.128631  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:29:39.138701  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:29:39.147791  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:29:39.147862  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:29:39.155880  189119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:29:39.164117  189119 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:29:39.164229  189119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:29:39.172203  189119 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:29:39.219423  189119 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:29:39.219513  189119 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:29:39.296264  189119 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:29:39.296383  189119 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:29:39.296445  189119 kubeadm.go:319] OS: Linux
	I1229 07:29:39.296519  189119 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:29:39.296598  189119 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:29:39.296665  189119 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:29:39.296739  189119 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:29:39.296812  189119 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:29:39.296889  189119 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:29:39.296964  189119 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:29:39.297069  189119 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:29:39.297156  189119 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:29:39.372302  189119 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:29:39.372462  189119 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:29:39.372589  189119 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:29:39.377736  189119 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:29:39.383260  189119 out.go:252]   - Generating certificates and keys ...
	I1229 07:29:39.383406  189119 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:29:39.383500  189119 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:29:39.383594  189119 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:29:39.383682  189119 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:29:39.383773  189119 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:29:39.383848  189119 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:29:39.383938  189119 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:29:39.384020  189119 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:29:39.384156  189119 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:29:39.384259  189119 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:29:39.384350  189119 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:29:39.384459  189119 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:29:39.684652  189119 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:29:39.875788  189119 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:29:39.953249  189119 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:29:40.366349  189119 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:29:40.620314  189119 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:29:40.620902  189119 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:29:40.623808  189119 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:29:40.627092  189119 out.go:252]   - Booting up control plane ...
	I1229 07:29:40.627315  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:29:40.627445  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:29:40.627866  189119 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:29:40.650965  189119 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:29:40.651151  189119 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:29:40.659368  189119 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:29:40.659793  189119 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:29:40.660045  189119 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:29:40.795353  189119 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:29:40.795475  189119 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:33:40.796165  189119 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00118168s
	I1229 07:33:40.796197  189119 kubeadm.go:319] 
	I1229 07:33:40.796278  189119 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:33:40.796329  189119 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:33:40.796442  189119 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:33:40.796459  189119 kubeadm.go:319] 
	I1229 07:33:40.796574  189119 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:33:40.796609  189119 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:33:40.796640  189119 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:33:40.796645  189119 kubeadm.go:319] 
	I1229 07:33:40.801537  189119 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:33:40.801961  189119 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:33:40.802075  189119 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:33:40.802312  189119 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:33:40.802322  189119 kubeadm.go:319] 
	I1229 07:33:40.802391  189119 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:33:40.802455  189119 kubeadm.go:403] duration metric: took 8m6.834213823s to StartCluster
	I1229 07:33:40.802492  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:33:40.802558  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:33:40.827758  189119 cri.go:96] found id: ""
	I1229 07:33:40.827797  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.827807  189119 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:33:40.827813  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1229 07:33:40.827878  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:33:40.853341  189119 cri.go:96] found id: ""
	I1229 07:33:40.853366  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.853374  189119 logs.go:284] No container was found matching "etcd"
	I1229 07:33:40.853380  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1229 07:33:40.853441  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:33:40.878686  189119 cri.go:96] found id: ""
	I1229 07:33:40.878713  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.878722  189119 logs.go:284] No container was found matching "coredns"
	I1229 07:33:40.878729  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:33:40.878792  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:33:40.904946  189119 cri.go:96] found id: ""
	I1229 07:33:40.904967  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.904975  189119 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:33:40.904982  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:33:40.905067  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:33:40.931121  189119 cri.go:96] found id: ""
	I1229 07:33:40.931187  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.931198  189119 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:33:40.931205  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:33:40.931275  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:33:40.955520  189119 cri.go:96] found id: ""
	I1229 07:33:40.955597  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.955622  189119 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:33:40.955640  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1229 07:33:40.955714  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:33:40.980927  189119 cri.go:96] found id: ""
	I1229 07:33:40.980949  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.980958  189119 logs.go:284] No container was found matching "kindnet"
	I1229 07:33:40.980968  189119 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:33:40.980980  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:33:41.051541  189119 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:33:41.042975    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.043723    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045388    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045886    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.047448    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:33:41.042975    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.043723    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045388    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045886    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.047448    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:33:41.051565  189119 logs.go:123] Gathering logs for containerd ...
	I1229 07:33:41.051584  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1229 07:33:41.090165  189119 logs.go:123] Gathering logs for container status ...
	I1229 07:33:41.090197  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:33:41.117344  189119 logs.go:123] Gathering logs for kubelet ...
	I1229 07:33:41.117371  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:33:41.177837  189119 logs.go:123] Gathering logs for dmesg ...
	I1229 07:33:41.177878  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1229 07:33:41.226068  189119 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:33:41.226130  189119 out.go:285] * 
	W1229 07:33:41.226329  189119 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:33:41.226351  189119 out.go:285] * 
	W1229 07:33:41.226700  189119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:33:41.232111  189119 out.go:203] 
	W1229 07:33:41.235821  189119 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:33:41.235906  189119 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:33:41.236234  189119 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:33:41.239328  189119 out.go:203] 

                                                
                                                
** /stderr **
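The repeated failure in the stderr above is the kubelet never answering its health endpoint, so kubeadm times out after 4m0s in the wait-control-plane phase. A minimal sketch of how one might confirm that by hand from the host, assuming the node container is still running (profile name copied from the log; these commands are illustrative follow-up, not part of the recorded run):

    # kubelet state inside the minikube node
    out/minikube-linux-arm64 -p force-systemd-env-765623 ssh -- sudo systemctl status kubelet --no-pager
    out/minikube-linux-arm64 -p force-systemd-env-765623 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
    # probe the same endpoint kubeadm was polling
    out/minikube-linux-arm64 -p force-systemd-env-765623 ssh -- curl -sS http://127.0.0.1:10248/healthz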
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-765623 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
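The suggestion printed at the end of the stderr is to align the kubelet's cgroup driver with systemd. A hedged sketch of that retry, assuming the env-based variant of this test drives systemd through the MINIKUBE_FORCE_SYSTEMD environment variable; this is the log's own suggestion, not a verified fix:

    # delete the half-started profile and retry with the suggested kubelet cgroup driver
    out/minikube-linux-arm64 delete -p force-systemd-env-765623
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-765623 \
      --memory=3072 --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=5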
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-765623 ssh "cat /etc/containerd/config.toml"
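What the config.toml check above is after: with systemd forced, minikube is expected to render containerd's runc options with the systemd cgroup driver turned on. A hedged way to look for that marker (the exact TOML section path differs between containerd 1.x and 2.x, so grep for the key rather than a fixed path; the expected value is an assumption about what the test asserts):

    out/minikube-linux-arm64 -p force-systemd-env-765623 ssh -- sudo grep -n "SystemdCgroup" /etc/containerd/config.toml
    # expected on a passing run (assumption): SystemdCgroup = true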
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-29 07:33:41.662753105 +0000 UTC m=+2842.283230576
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-765623
helpers_test.go:244: (dbg) docker inspect force-systemd-env-765623:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5",
	        "Created": "2025-12-29T07:25:23.713090692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:25:23.798548677Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5/hostname",
	        "HostsPath": "/var/lib/docker/containers/420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5/hosts",
	        "LogPath": "/var/lib/docker/containers/420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5/420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5-json.log",
	        "Name": "/force-systemd-env-765623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-765623:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-765623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "420f1e4784282d4a3e9859bbf1d11ae77d5e0a03809ed2aafd9fe80591db7ca5",
	                "LowerDir": "/var/lib/docker/overlay2/0a0154766c23af3b1d517329f97c520f25236f527241aa81e36aefe53a811218-init/diff:/var/lib/docker/overlay2/54d5b7cffc5e9463f8f08189f8469b00e160a6e6f01791a5d6d8fd2d4f288a08/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a0154766c23af3b1d517329f97c520f25236f527241aa81e36aefe53a811218/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a0154766c23af3b1d517329f97c520f25236f527241aa81e36aefe53a811218/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a0154766c23af3b1d517329f97c520f25236f527241aa81e36aefe53a811218/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-765623",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-765623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-765623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-765623",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-765623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edee1f2115aa14f32c83ab0bde1c33066c91104208c325945c379c47d8f920d2",
	            "SandboxKey": "/var/run/docker/netns/edee1f2115aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-765623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:cd:19:d1:20:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c78f904b7647a7db49618de74e2fdd9aecbbaeb4c2b1ef673bcf13703bb8f7d2",
	                    "EndpointID": "9fa54a0a8f420b94e90249cddb6fde056f711ae26f8c2ba236afc6ee9136afb4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-765623",
	                        "420f1e478428"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-765623 -n force-systemd-env-765623
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-765623 -n force-systemd-env-765623: exit status 6 (386.521494ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:33:42.048772  213856 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-765623" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
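The status error here is the stale kubeconfig the stdout warning already points at: the kubeconfig has no endpoint for this profile, so status cannot resolve the apiserver. The fix the warning names is minikube update-context; a hedged sketch of that follow-up (illustrative only, and it will still report a missing endpoint while the cluster is down):

    out/minikube-linux-arm64 -p force-systemd-env-765623 update-context
    out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-765623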
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-765623 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-343069 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl cat docker --no-pager                                                                             │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo docker system info                                                                                          │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cri-dockerd --version                                                                                       │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl cat containerd --no-pager                                                                         │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo cat /etc/containerd/config.toml                                                                             │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo containerd config dump                                                                                      │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo systemctl cat crio --no-pager                                                                               │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ ssh     │ -p cilium-343069 sudo crio config                                                                                                 │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │                     │
	│ delete  │ -p cilium-343069                                                                                                                  │ cilium-343069             │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │ 29 Dec 25 07:28 UTC │
	│ start   │ -p cert-expiration-688553 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-688553    │ jenkins │ v1.37.0 │ 29 Dec 25 07:28 UTC │ 29 Dec 25 07:28 UTC │
	│ start   │ -p cert-expiration-688553 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-688553    │ jenkins │ v1.37.0 │ 29 Dec 25 07:31 UTC │ 29 Dec 25 07:31 UTC │
	│ delete  │ -p cert-expiration-688553                                                                                                         │ cert-expiration-688553    │ jenkins │ v1.37.0 │ 29 Dec 25 07:31 UTC │ 29 Dec 25 07:31 UTC │
	│ start   │ -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-275936 │ jenkins │ v1.37.0 │ 29 Dec 25 07:31 UTC │                     │
	│ ssh     │ force-systemd-env-765623 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-765623  │ jenkins │ v1.37.0 │ 29 Dec 25 07:33 UTC │ 29 Dec 25 07:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:31:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:31:54.990118  210456 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:31:54.990303  210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:31:54.990335  210456 out.go:374] Setting ErrFile to fd 2...
	I1229 07:31:54.990355  210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:31:54.990732  210456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:31:54.991290  210456 out.go:368] Setting JSON to false
	I1229 07:31:54.992670  210456 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4466,"bootTime":1766989049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 07:31:54.992775  210456 start.go:143] virtualization:  
	I1229 07:31:54.999193  210456 out.go:179] * [force-systemd-flag-275936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:31:55.014374  210456 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:31:55.014524  210456 notify.go:221] Checking for updates...
	I1229 07:31:55.021978  210456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:31:55.025445  210456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:31:55.028900  210456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 07:31:55.032526  210456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:31:55.035779  210456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:31:55.039716  210456 config.go:182] Loaded profile config "force-systemd-env-765623": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:31:55.039858  210456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:31:55.062289  210456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:31:55.062411  210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:31:55.126864  210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.117265138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:31:55.126971  210456 docker.go:319] overlay module found
	I1229 07:31:55.130288  210456 out.go:179] * Using the docker driver based on user configuration
	I1229 07:31:55.133429  210456 start.go:309] selected driver: docker
	I1229 07:31:55.133455  210456 start.go:928] validating driver "docker" against <nil>
	I1229 07:31:55.133470  210456 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:31:55.134222  210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:31:55.189237  210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.17992811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:31:55.189389  210456 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:31:55.189601  210456 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:31:55.192735  210456 out.go:179] * Using Docker driver with root privileges
	I1229 07:31:55.195689  210456 cni.go:84] Creating CNI manager for ""
	I1229 07:31:55.195764  210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:31:55.195784  210456 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:31:55.195864  210456 start.go:353] cluster config:
	{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

                                                
                                                
	I1229 07:31:55.199033  210456 out.go:179] * Starting "force-systemd-flag-275936" primary control-plane node in "force-systemd-flag-275936" cluster
	I1229 07:31:55.201990  210456 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1229 07:31:55.205087  210456 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:31:55.208135  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:31:55.208186  210456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1229 07:31:55.208196  210456 cache.go:65] Caching tarball of preloaded images
	I1229 07:31:55.208228  210456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:31:55.208280  210456 preload.go:251] Found /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:31:55.208290  210456 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1229 07:31:55.208394  210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
	I1229 07:31:55.208411  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json: {Name:mkce2701c5739928b2701138ece40a77f13e0afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:31:55.235557  210456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:31:55.235583  210456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:31:55.235603  210456 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:31:55.235641  210456 start.go:360] acquireMachinesLock for force-systemd-flag-275936: {Name:mkc1ff8fd971687527ddb66e30c065b7dec5d125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:31:55.235763  210456 start.go:364] duration metric: took 102.705µs to acquireMachinesLock for "force-systemd-flag-275936"
	I1229 07:31:55.235792  210456 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1229 07:31:55.235867  210456 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:31:55.239336  210456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:31:55.239605  210456 start.go:159] libmachine.API.Create for "force-systemd-flag-275936" (driver="docker")
	I1229 07:31:55.239645  210456 client.go:173] LocalClient.Create starting
	I1229 07:31:55.239732  210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
	I1229 07:31:55.239774  210456 main.go:144] libmachine: Decoding PEM data...
	I1229 07:31:55.239790  210456 main.go:144] libmachine: Parsing certificate...
	I1229 07:31:55.239844  210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
	I1229 07:31:55.239866  210456 main.go:144] libmachine: Decoding PEM data...
	I1229 07:31:55.239877  210456 main.go:144] libmachine: Parsing certificate...
	I1229 07:31:55.240246  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:31:55.259118  210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:31:55.259228  210456 network_create.go:284] running [docker network inspect force-systemd-flag-275936] to gather additional debugging logs...
	I1229 07:31:55.259249  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936
	W1229 07:31:55.275676  210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 returned with exit code 1
	I1229 07:31:55.275729  210456 network_create.go:287] error running [docker network inspect force-systemd-flag-275936]: docker network inspect force-systemd-flag-275936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-275936 not found
	I1229 07:31:55.275743  210456 network_create.go:289] output of [docker network inspect force-systemd-flag-275936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-275936 not found
	
	** /stderr **
	I1229 07:31:55.275852  210456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:31:55.295712  210456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
	I1229 07:31:55.296163  210456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
	I1229 07:31:55.296569  210456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
	I1229 07:31:55.297004  210456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c78f904b7647 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:23:10:63:16:dd} reservation:<nil>}
	I1229 07:31:55.297525  210456 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4b020}
	I1229 07:31:55.297549  210456 network_create.go:124] attempt to create docker network force-systemd-flag-275936 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:31:55.297626  210456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-275936 force-systemd-flag-275936
	I1229 07:31:55.356469  210456 network_create.go:108] docker network force-systemd-flag-275936 192.168.85.0/24 created
	I1229 07:31:55.356503  210456 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-275936" container
	I1229 07:31:55.356596  210456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:31:55.372634  210456 cli_runner.go:164] Run: docker volume create force-systemd-flag-275936 --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:31:55.390334  210456 oci.go:103] Successfully created a docker volume force-systemd-flag-275936
	I1229 07:31:55.390428  210456 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-275936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --entrypoint /usr/bin/test -v force-systemd-flag-275936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:31:55.963123  210456 oci.go:107] Successfully prepared a docker volume force-systemd-flag-275936
	I1229 07:31:55.963188  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:31:55.963199  210456 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:31:55.963282  210456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:31:59.824384  210456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.86104634s)
	I1229 07:31:59.824418  210456 kic.go:203] duration metric: took 3.861215926s to extract preloaded images to volume ...
	W1229 07:31:59.824564  210456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:31:59.824685  210456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:31:59.876072  210456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-275936 --name force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-275936 --network force-systemd-flag-275936 --ip 192.168.85.2 --volume force-systemd-flag-275936:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:32:00.556829  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Running}}
	I1229 07:32:00.579290  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:00.610624  210456 cli_runner.go:164] Run: docker exec force-systemd-flag-275936 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:32:00.666102  210456 oci.go:144] the created container "force-systemd-flag-275936" has a running status.
	I1229 07:32:00.666144  210456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa...
	I1229 07:32:00.928093  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:32:00.928158  210456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:32:00.955575  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:00.978812  210456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:32:00.978832  210456 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-275936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:32:01.046827  210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
	I1229 07:32:01.063890  210456 machine.go:94] provisionDockerMachine start ...
	I1229 07:32:01.063978  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:01.083021  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:01.083355  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:01.083364  210456 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:32:01.084071  210456 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:32:04.237095  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
	
	I1229 07:32:04.237134  210456 ubuntu.go:182] provisioning hostname "force-systemd-flag-275936"
	I1229 07:32:04.237227  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:04.256216  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:04.256528  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:04.256544  210456 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-275936 && echo "force-systemd-flag-275936" | sudo tee /etc/hostname
	I1229 07:32:04.418929  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
	
	I1229 07:32:04.419007  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:04.446717  210456 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:04.447036  210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1229 07:32:04.447059  210456 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-275936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-275936/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-275936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:32:04.609426  210456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:32:04.609457  210456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
	I1229 07:32:04.609486  210456 ubuntu.go:190] setting up certificates
	I1229 07:32:04.609501  210456 provision.go:84] configureAuth start
	I1229 07:32:04.609566  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:04.626388  210456 provision.go:143] copyHostCerts
	I1229 07:32:04.626430  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:32:04.626466  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
	I1229 07:32:04.626484  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
	I1229 07:32:04.626565  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
	I1229 07:32:04.626654  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:32:04.626677  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
	I1229 07:32:04.626681  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
	I1229 07:32:04.626716  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
	I1229 07:32:04.626772  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:32:04.626794  210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
	I1229 07:32:04.626799  210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
	I1229 07:32:04.626833  210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
	I1229 07:32:04.626893  210456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-275936 san=[127.0.0.1 192.168.85.2 force-systemd-flag-275936 localhost minikube]
	I1229 07:32:05.170037  210456 provision.go:177] copyRemoteCerts
	I1229 07:32:05.170107  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:32:05.170157  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.198376  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.304972  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:32:05.305054  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:32:05.323515  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:32:05.323579  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:32:05.342427  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:32:05.342499  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:32:05.360800  210456 provision.go:87] duration metric: took 751.283522ms to configureAuth
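For reference while debugging this test, the server certificate generated above carries the SAN list shown in the log (127.0.0.1, 192.168.85.2, force-systemd-flag-275936, localhost, minikube). A minimal inspection sketch, run on the CI host using the ServerCertPath from the auth options above (purely a manual aid, not part of the test flow):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'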
	I1229 07:32:05.360827  210456 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:32:05.361018  210456 config.go:182] Loaded profile config "force-systemd-flag-275936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:32:05.361047  210456 machine.go:97] duration metric: took 4.297134989s to provisionDockerMachine
	I1229 07:32:05.361055  210456 client.go:176] duration metric: took 10.12140189s to LocalClient.Create
	I1229 07:32:05.361075  210456 start.go:167] duration metric: took 10.121472807s to libmachine.API.Create "force-systemd-flag-275936"
	I1229 07:32:05.361083  210456 start.go:293] postStartSetup for "force-systemd-flag-275936" (driver="docker")
	I1229 07:32:05.361091  210456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:32:05.361147  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:32:05.361185  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.380875  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.485408  210456 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:32:05.489100  210456 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:32:05.489170  210456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:32:05.489195  210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
	I1229 07:32:05.489255  210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
	I1229 07:32:05.489343  210456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
	I1229 07:32:05.489355  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /etc/ssl/certs/43522.pem
	I1229 07:32:05.489461  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:32:05.497396  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:32:05.515749  210456 start.go:296] duration metric: took 154.652975ms for postStartSetup
	I1229 07:32:05.516127  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:05.533819  210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
	I1229 07:32:05.534100  210456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:32:05.534159  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.551565  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.654403  210456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:32:05.659341  210456 start.go:128] duration metric: took 10.423458394s to createHost
	I1229 07:32:05.659375  210456 start.go:83] releasing machines lock for "force-systemd-flag-275936", held for 10.423592738s
	I1229 07:32:05.659448  210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
	I1229 07:32:05.678420  210456 ssh_runner.go:195] Run: cat /version.json
	I1229 07:32:05.678492  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.678576  210456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:32:05.678642  210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
	I1229 07:32:05.697110  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.710766  210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
	I1229 07:32:05.800849  210456 ssh_runner.go:195] Run: systemctl --version
	I1229 07:32:05.906106  210456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:32:05.913794  210456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:32:05.913886  210456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:32:05.943408  210456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:32:05.943429  210456 start.go:496] detecting cgroup driver to use...
	I1229 07:32:05.943443  210456 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:32:05.943498  210456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1229 07:32:05.960297  210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:32:05.975696  210456 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:32:05.975754  210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:32:05.997010  210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:32:06.022997  210456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:32:06.148117  210456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:32:06.280642  210456 docker.go:234] disabling docker service ...
	I1229 07:32:06.280756  210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:32:06.304036  210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:32:06.318700  210456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:32:06.443465  210456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:32:06.572584  210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:32:06.586444  210456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:32:06.602103  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:32:06.611453  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:32:06.620606  210456 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:32:06.620725  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:32:06.630240  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:32:06.639541  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:32:06.649286  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:32:06.658362  210456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:32:06.667478  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:32:06.677469  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:32:06.687174  210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:32:06.696948  210456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:32:06.705434  210456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:32:06.713593  210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:32:06.830071  210456 ssh_runner.go:195] Run: sudo systemctl restart containerd
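The sed edits above are what the --force-systemd flag amounts to for containerd: they flip SystemdCgroup to true in /etc/containerd/config.toml before this restart. A minimal spot-check sketch, assuming a shell on the node container (container name and file path taken from this log; not part of the test itself):

	# expect a line like:  SystemdCgroup = true
	docker exec force-systemd-flag-275936 grep -n 'SystemdCgroup' /etc/containerd/config.toml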
	I1229 07:32:06.972284  210456 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1229 07:32:06.972372  210456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1229 07:32:06.976442  210456 start.go:574] Will wait 60s for crictl version
	I1229 07:32:06.976556  210456 ssh_runner.go:195] Run: which crictl
	I1229 07:32:06.980543  210456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:32:07.009695  210456 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1229 07:32:07.009824  210456 ssh_runner.go:195] Run: containerd --version
	I1229 07:32:07.032066  210456 ssh_runner.go:195] Run: containerd --version
	I1229 07:32:07.059211  210456 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1229 07:32:07.062242  210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:32:07.079092  210456 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:32:07.083157  210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:32:07.093628  210456 kubeadm.go:884] updating cluster {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:32:07.093752  210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1229 07:32:07.093832  210456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:32:07.119407  210456 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:32:07.119431  210456 containerd.go:542] Images already preloaded, skipping extraction
	I1229 07:32:07.119497  210456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:32:07.144660  210456 containerd.go:635] all images are preloaded for containerd runtime.
	I1229 07:32:07.144737  210456 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:32:07.144759  210456 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1229 07:32:07.144898  210456 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-275936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:32:07.144994  210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1229 07:32:07.174108  210456 cni.go:84] Creating CNI manager for ""
	I1229 07:32:07.174131  210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 07:32:07.174152  210456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:32:07.174176  210456 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-275936 NodeName:force-systemd-flag-275936 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:32:07.174301  210456 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-275936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:32:07.174374  210456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:32:07.182508  210456 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:32:07.182591  210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:32:07.190487  210456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1229 07:32:07.203868  210456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:32:07.217157  210456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
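The kubelet half of the forced systemd cgroup driver is the cgroupDriver: systemd entry in the KubeletConfiguration block of the kubeadm config written just above. A matching spot-check sketch against the path from this log (again only a manual debugging aid; the file is copied to kubeadm.yaml further down):

	docker exec force-systemd-flag-275936 grep -n 'cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new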
	I1229 07:32:07.229905  210456 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:32:07.233686  210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:32:07.243649  210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:32:07.352826  210456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:32:07.369694  210456 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936 for IP: 192.168.85.2
	I1229 07:32:07.369715  210456 certs.go:195] generating shared ca certs ...
	I1229 07:32:07.369731  210456 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.369899  210456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
	I1229 07:32:07.369954  210456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
	I1229 07:32:07.369966  210456 certs.go:257] generating profile certs ...
	I1229 07:32:07.370034  210456 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key
	I1229 07:32:07.370051  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt with IP's: []
	I1229 07:32:07.651508  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt ...
	I1229 07:32:07.651543  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt: {Name:mkc96444933691c9c7712e10522774b7837acc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.651739  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key ...
	I1229 07:32:07.651754  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key: {Name:mk42aa340448fdd8ef54b06b419e1bc9521849ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.651848  210456 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f
	I1229 07:32:07.651868  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:32:07.848324  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f ...
	I1229 07:32:07.848363  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f: {Name:mk68515dedc39c6aa92cea4b93fb1d928671a1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.848540  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f ...
	I1229 07:32:07.848554  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f: {Name:mk54be500c1ee65f80b3e1b34359ca9c53176eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.848636  210456 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt
	I1229 07:32:07.848714  210456 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key
	I1229 07:32:07.848778  210456 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key
	I1229 07:32:07.848799  210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt with IP's: []
	I1229 07:32:07.938444  210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt ...
	I1229 07:32:07.938479  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt: {Name:mkdba6db08e3be7cf95db626fb2a49fc799397bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.938677  210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key ...
	I1229 07:32:07.938695  210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key: {Name:mk5ae15bf1e8cecd3236539da010f90c7a6ecc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:07.938805  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:32:07.938827  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:32:07.938845  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:32:07.938871  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:32:07.938889  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:32:07.938912  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:32:07.938935  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:32:07.938954  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:32:07.939035  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
	W1229 07:32:07.939082  210456 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
	I1229 07:32:07.939096  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:32:07.939131  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:32:07.939161  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:32:07.939189  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
	I1229 07:32:07.939241  210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
	I1229 07:32:07.939275  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /usr/share/ca-certificates/43522.pem
	I1229 07:32:07.939292  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:07.939303  210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem -> /usr/share/ca-certificates/4352.pem
	I1229 07:32:07.939851  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:32:07.959114  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:32:07.979433  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:32:07.999407  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:32:08.025146  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:32:08.044851  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:32:08.064128  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:32:08.082920  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:32:08.101321  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
	I1229 07:32:08.120009  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:32:08.138483  210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
	I1229 07:32:08.156729  210456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:32:08.171909  210456 ssh_runner.go:195] Run: openssl version
	I1229 07:32:08.195409  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.211936  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
	I1229 07:32:08.230603  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.242069  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.242186  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
	I1229 07:32:08.291052  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:32:08.298917  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:32:08.306719  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.314431  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:32:08.322022  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.325798  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.325862  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:32:08.366998  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:32:08.374640  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:32:08.382089  210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.389623  210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
	I1229 07:32:08.397153  210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.400818  210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.400884  210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
	I1229 07:32:08.442132  210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:32:08.450035  210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
	I1229 07:32:08.457563  210456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:32:08.461327  210456 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:32:08.461398  210456 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:32:08.461473  210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1229 07:32:08.461536  210456 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:32:08.487727  210456 cri.go:96] found id: ""
	I1229 07:32:08.487799  210456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:32:08.496267  210456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:32:08.504412  210456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:32:08.504475  210456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:32:08.512558  210456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:32:08.512581  210456 kubeadm.go:158] found existing configuration files:
	
	I1229 07:32:08.512658  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:32:08.521258  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:32:08.521347  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:32:08.529140  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:32:08.537528  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:32:08.537643  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:32:08.545668  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:32:08.554110  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:32:08.554178  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:32:08.562121  210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:32:08.570646  210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:32:08.570735  210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:32:08.578306  210456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:32:08.621067  210456 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:32:08.621176  210456 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:32:08.701369  210456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:32:08.701449  210456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:32:08.701490  210456 kubeadm.go:319] OS: Linux
	I1229 07:32:08.701540  210456 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:32:08.701591  210456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:32:08.701642  210456 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:32:08.701717  210456 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:32:08.701769  210456 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:32:08.701820  210456 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:32:08.701869  210456 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:32:08.701919  210456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:32:08.701970  210456 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:32:08.775505  210456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:32:08.775618  210456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:32:08.775723  210456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:32:08.781821  210456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:32:08.788503  210456 out.go:252]   - Generating certificates and keys ...
	I1229 07:32:08.788612  210456 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:32:08.788684  210456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:32:09.057098  210456 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:32:09.418697  210456 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:32:09.572406  210456 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:32:09.643544  210456 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:32:10.339592  210456 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:32:10.339844  210456 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:32:10.482674  210456 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:32:10.483213  210456 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:32:10.795512  210456 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:32:10.975588  210456 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:32:11.248756  210456 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:32:11.248853  210456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:32:11.450295  210456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:32:11.719139  210456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:32:11.898464  210456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:32:12.299659  210456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:32:12.511471  210456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:32:12.512244  210456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:32:12.515181  210456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:32:12.518939  210456 out.go:252]   - Booting up control plane ...
	I1229 07:32:12.519048  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:32:12.519127  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:32:12.519194  210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:32:12.536415  210456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:32:12.536828  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:32:12.543945  210456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:32:12.544276  210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:32:12.544449  210456 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:32:12.687864  210456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:32:12.687987  210456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:33:40.796165  189119 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00118168s
	I1229 07:33:40.796197  189119 kubeadm.go:319] 
	I1229 07:33:40.796278  189119 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:33:40.796329  189119 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:33:40.796442  189119 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:33:40.796459  189119 kubeadm.go:319] 
	I1229 07:33:40.796574  189119 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:33:40.796609  189119 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:33:40.796640  189119 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:33:40.796645  189119 kubeadm.go:319] 
	I1229 07:33:40.801537  189119 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:33:40.801961  189119 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:33:40.802075  189119 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:33:40.802312  189119 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:33:40.802322  189119 kubeadm.go:319] 
	I1229 07:33:40.802391  189119 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:33:40.802455  189119 kubeadm.go:403] duration metric: took 8m6.834213823s to StartCluster
	I1229 07:33:40.802492  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:33:40.802558  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:33:40.827758  189119 cri.go:96] found id: ""
	I1229 07:33:40.827797  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.827807  189119 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:33:40.827813  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1229 07:33:40.827878  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:33:40.853341  189119 cri.go:96] found id: ""
	I1229 07:33:40.853366  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.853374  189119 logs.go:284] No container was found matching "etcd"
	I1229 07:33:40.853380  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1229 07:33:40.853441  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:33:40.878686  189119 cri.go:96] found id: ""
	I1229 07:33:40.878713  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.878722  189119 logs.go:284] No container was found matching "coredns"
	I1229 07:33:40.878729  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:33:40.878792  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:33:40.904946  189119 cri.go:96] found id: ""
	I1229 07:33:40.904967  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.904975  189119 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:33:40.904982  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:33:40.905067  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:33:40.931121  189119 cri.go:96] found id: ""
	I1229 07:33:40.931187  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.931198  189119 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:33:40.931205  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:33:40.931275  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:33:40.955520  189119 cri.go:96] found id: ""
	I1229 07:33:40.955597  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.955622  189119 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:33:40.955640  189119 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1229 07:33:40.955714  189119 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:33:40.980927  189119 cri.go:96] found id: ""
	I1229 07:33:40.980949  189119 logs.go:282] 0 containers: []
	W1229 07:33:40.980958  189119 logs.go:284] No container was found matching "kindnet"
	I1229 07:33:40.980968  189119 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:33:40.980980  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:33:41.051541  189119 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:33:41.042975    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.043723    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045388    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045886    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.047448    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:33:41.042975    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.043723    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045388    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.045886    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:41.047448    4865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:33:41.051565  189119 logs.go:123] Gathering logs for containerd ...
	I1229 07:33:41.051584  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1229 07:33:41.090165  189119 logs.go:123] Gathering logs for container status ...
	I1229 07:33:41.090197  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:33:41.117344  189119 logs.go:123] Gathering logs for kubelet ...
	I1229 07:33:41.117371  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:33:41.177837  189119 logs.go:123] Gathering logs for dmesg ...
	I1229 07:33:41.177878  189119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1229 07:33:41.226068  189119 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:33:41.226130  189119 out.go:285] * 
	W1229 07:33:41.226329  189119 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:33:41.226351  189119 out.go:285] * 
	W1229 07:33:41.226700  189119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:33:41.232111  189119 out.go:203] 
	W1229 07:33:41.235821  189119 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:33:41.235906  189119 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:33:41.236234  189119 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:33:41.239328  189119 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920408591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920503189Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920605663Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920680979Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920744692Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920809907Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.920975562Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.921183998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.921273025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.921372037Z" level=info msg="Connect containerd service"
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.921826941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.922603202Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.942057549Z" level=info msg="Start subscribing containerd event"
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.942278301Z" level=info msg="Start recovering state"
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.943915944Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 29 07:25:30 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:30.944138739Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.080827554Z" level=info msg="Start event monitor"
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081017308Z" level=info msg="Start cni network conf syncer for default"
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081178868Z" level=info msg="Start streaming server"
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081752895Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081828802Z" level=info msg="runtime interface starting up..."
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081891744Z" level=info msg="starting plugins..."
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.081968257Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:25:31 force-systemd-env-765623 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 29 07:25:31 force-systemd-env-765623 containerd[758]: time="2025-12-29T07:25:31.097589112Z" level=info msg="containerd successfully booted in 0.213623s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:33:42.777951    5003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:42.778545    5003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:42.780266    5003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:42.780760    5003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:33:42.782398    5003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec29 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014780] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.558389] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034938] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.769839] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.300699] kauditd_printk_skb: 39 callbacks suppressed
	[Dec29 07:00] hrtimer: interrupt took 19167915 ns
	
	
	==> kernel <==
	 07:33:42 up  1:16,  0 user,  load average: 0.45, 1.51, 2.21
	Linux force-systemd-env-765623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:33:39 force-systemd-env-765623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:33:40 force-systemd-env-765623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 29 07:33:40 force-systemd-env-765623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:40 force-systemd-env-765623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:40 force-systemd-env-765623 kubelet[4798]: E1229 07:33:40.489867    4798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:33:40 force-systemd-env-765623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:33:40 force-systemd-env-765623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:41 force-systemd-env-765623 kubelet[4885]: E1229 07:33:41.264953    4885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:41 force-systemd-env-765623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:42 force-systemd-env-765623 kubelet[4916]: E1229 07:33:42.020237    4916 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:33:42 force-systemd-env-765623 kubelet[5008]: E1229 07:33:42.760575    5008 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:33:42 force-systemd-env-765623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-765623 -n force-systemd-env-765623
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-765623 -n force-systemd-env-765623: exit status 6 (374.272541ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1229 07:33:43.262246  214089 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-765623" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-765623" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-765623" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-765623
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-765623: (2.222202805s)
--- FAIL: TestForceSystemdEnv (508.49s)
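The kubelet journal above pins down why the failed start times out at the kubelet health check: on this cgroup v1 host, kubelet v1.35 exits immediately with "kubelet is configured to not run on a host using cgroup v1", systemd keeps restarting it (restart counter 319 through 322), so the kubeadm wait on http://127.0.0.1:10248/healthz can never succeed. A minimal triage sketch is below; the two kubelet commands are the ones kubeadm itself suggests in the log, while the profile placeholder, the `tail` filter, and the cgroup check are illustrative additions, not commands from this run.

	# run inside the minikube node (e.g. via `minikube ssh -p <profile>`)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# check which cgroup hierarchy the node sees: cgroup2fs means v2, tmpfs means v1
	stat -fc %T /sys/fs/cgroup

Per the [WARNING SystemVerification] lines, keeping kubelet v1.35 on cgroup v1 would require explicitly setting the kubelet configuration option FailCgroupV1 to false and skipping that validation; otherwise the warning's own migration note points at moving the host to cgroup v2.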


Test pass (305/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.35
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.5
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 119.67
29 TestAddons/serial/Volcano 45.57
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.91
35 TestAddons/parallel/Registry 15.5
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 17.71
38 TestAddons/parallel/InspektorGadget 11.76
39 TestAddons/parallel/MetricsServer 5.77
41 TestAddons/parallel/CSI 43.06
42 TestAddons/parallel/Headlamp 16.77
43 TestAddons/parallel/CloudSpanner 5.64
44 TestAddons/parallel/LocalPath 53.58
45 TestAddons/parallel/NvidiaDevicePlugin 5.56
46 TestAddons/parallel/Yakd 11.97
48 TestAddons/StoppedEnableDisable 12.32
49 TestCertOptions 30.93
50 TestCertExpiration 215.69
54 TestDockerEnvContainerd 42.08
58 TestErrorSpam/setup 28.71
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.18
61 TestErrorSpam/pause 1.71
62 TestErrorSpam/unpause 1.97
63 TestErrorSpam/stop 1.7
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 46.01
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.19
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
75 TestFunctional/serial/CacheCmd/cache/add_local 1.19
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 49.24
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.16
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 8.2
91 TestFunctional/parallel/DryRun 0.55
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 7.61
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 21.9
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 2.21
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 8.76
130 TestFunctional/parallel/ServiceCmd/List 0.59
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
133 TestFunctional/parallel/ServiceCmd/Format 0.44
134 TestFunctional/parallel/ServiceCmd/URL 0.41
135 TestFunctional/parallel/MountCmd/specific-port 2.2
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.98
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.31
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.18
144 TestFunctional/parallel/ImageCommands/Setup 0.68
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 166.73
163 TestMultiControlPlane/serial/DeployApp 7.24
164 TestMultiControlPlane/serial/PingHostFromPods 1.61
165 TestMultiControlPlane/serial/AddWorkerNode 30.12
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
168 TestMultiControlPlane/serial/CopyFile 20.18
169 TestMultiControlPlane/serial/StopSecondaryNode 13.03
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 16.38
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.23
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.72
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.46
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.85
176 TestMultiControlPlane/serial/StopCluster 36.54
177 TestMultiControlPlane/serial/RestartCluster 61.15
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 51.29
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
185 TestJSONOutput/start/Command 46.57
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.45
210 TestKicCustomNetwork/create_custom_network 33.54
211 TestKicCustomNetwork/use_default_bridge_network 31.48
212 TestKicExistingNetwork 30.1
213 TestKicCustomSubnet 29.66
214 TestKicStaticIP 31.26
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 59.91
219 TestMountStart/serial/StartWithMountFirst 8.36
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.54
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.33
226 TestMountStart/serial/RestartStopped 7.7
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 75.25
231 TestMultiNode/serial/DeployApp2Nodes 4.89
232 TestMultiNode/serial/PingHostFrom2Pods 0.96
233 TestMultiNode/serial/AddNode 27.73
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.76
236 TestMultiNode/serial/CopyFile 10.77
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.11
239 TestMultiNode/serial/RestartKeepsNodes 80.71
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.13
242 TestMultiNode/serial/RestartMultiNode 55.21
243 TestMultiNode/serial/ValidateNameConflict 30.47
250 TestScheduledStopUnix 103.22
253 TestInsufficientStorage 12.75
254 TestRunningBinaryUpgrade 73.73
256 TestKubernetesUpgrade 334.41
257 TestMissingContainerUpgrade 156.58
259 TestPause/serial/Start 54.36
260 TestPause/serial/SecondStartNoReconfiguration 8.22
261 TestPause/serial/Pause 0.88
262 TestPause/serial/VerifyStatus 0.41
263 TestPause/serial/Unpause 0.83
264 TestPause/serial/PauseAgain 0.99
265 TestPause/serial/DeletePaused 3.47
266 TestPause/serial/VerifyDeletedResources 0.21
267 TestStoppedBinaryUpgrade/Setup 1.59
268 TestStoppedBinaryUpgrade/Upgrade 305.44
269 TestStoppedBinaryUpgrade/MinikubeLogs 3.08
277 TestPreload/Start-NoPreload-PullImage 66.24
278 TestPreload/Restart-With-Preload-Check-User-Image 47.74
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
282 TestNoKubernetes/serial/StartWithK8s 27.39
283 TestNoKubernetes/serial/StartWithStopK8s 16.33
284 TestNoKubernetes/serial/Start 7.8
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
287 TestNoKubernetes/serial/ProfileList 1.02
288 TestNoKubernetes/serial/Stop 1.34
289 TestNoKubernetes/serial/StartNoArgs 6.97
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
298 TestNetworkPlugins/group/false 3.61
303 TestStartStop/group/old-k8s-version/serial/FirstStart 60.86
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
306 TestStartStop/group/old-k8s-version/serial/Stop 12.14
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/old-k8s-version/serial/SecondStart 51.13
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
312 TestStartStop/group/old-k8s-version/serial/Pause 3.16
314 TestStartStop/group/embed-certs/serial/FirstStart 48.5
315 TestStartStop/group/embed-certs/serial/DeployApp 9.32
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
317 TestStartStop/group/embed-certs/serial/Stop 12.14
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/embed-certs/serial/SecondStart 51.4
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/embed-certs/serial/Pause 3.15
325 TestStartStop/group/no-preload/serial/FirstStart 52.43
326 TestStartStop/group/no-preload/serial/DeployApp 10.33
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
328 TestStartStop/group/no-preload/serial/Stop 12.62
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.05
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
332 TestStartStop/group/no-preload/serial/SecondStart 54.17
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
339 TestStartStop/group/no-preload/serial/Pause 3.48
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.84
343 TestStartStop/group/newest-cni/serial/FirstStart 37.8
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
346 TestStartStop/group/newest-cni/serial/Stop 1.58
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/newest-cni/serial/SecondStart 15.05
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
353 TestStartStop/group/newest-cni/serial/Pause 3.15
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
355 TestPreload/PreloadSrc/gcs 4.46
356 TestPreload/PreloadSrc/github 6.19
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.23
359 TestPreload/PreloadSrc/gcs-cached 1.04
360 TestNetworkPlugins/group/auto/Start 52.36
361 TestNetworkPlugins/group/kindnet/Start 47.79
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/auto/KubeletFlags 0.32
364 TestNetworkPlugins/group/auto/NetCatPod 10.29
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
366 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
367 TestNetworkPlugins/group/auto/DNS 0.19
368 TestNetworkPlugins/group/auto/Localhost 0.16
369 TestNetworkPlugins/group/auto/HairPin 0.17
370 TestNetworkPlugins/group/kindnet/DNS 0.21
371 TestNetworkPlugins/group/kindnet/Localhost 0.16
372 TestNetworkPlugins/group/kindnet/HairPin 0.15
373 TestNetworkPlugins/group/calico/Start 63.57
374 TestNetworkPlugins/group/custom-flannel/Start 56.46
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.39
377 TestNetworkPlugins/group/calico/ControllerPod 6.01
378 TestNetworkPlugins/group/calico/KubeletFlags 0.39
379 TestNetworkPlugins/group/calico/NetCatPod 10.3
380 TestNetworkPlugins/group/custom-flannel/DNS 0.23
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
383 TestNetworkPlugins/group/calico/DNS 0.34
384 TestNetworkPlugins/group/calico/Localhost 0.23
385 TestNetworkPlugins/group/calico/HairPin 0.26
386 TestNetworkPlugins/group/enable-default-cni/Start 74.08
387 TestNetworkPlugins/group/flannel/Start 54.5
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
390 TestNetworkPlugins/group/flannel/NetCatPod 10.29
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
393 TestNetworkPlugins/group/flannel/DNS 0.18
394 TestNetworkPlugins/group/flannel/Localhost 0.16
395 TestNetworkPlugins/group/flannel/HairPin 0.18
396 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
397 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
398 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
399 TestNetworkPlugins/group/bridge/Start 65.33
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
401 TestNetworkPlugins/group/bridge/NetCatPod 8.28
402 TestNetworkPlugins/group/bridge/DNS 0.19
403 TestNetworkPlugins/group/bridge/Localhost 0.15
404 TestNetworkPlugins/group/bridge/HairPin 0.16

TestDownloadOnly/v1.28.0/json-events (5.35s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-357391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-357391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.345209187s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.35s)

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1229 06:46:24.765968    4352 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1229 06:46:24.766044    4352 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
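Note: the preload-exists check above only asserts that the cached tarball from the earlier download step is present on disk. A manual spot-check (a sketch, assuming the MINIKUBE_HOME used in this run):

    # Path reported by preload.go above; point MINIKUBE_HOME at your own cache dir.
    MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"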

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-357391
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-357391: exit status 85 (84.518713ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-357391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-357391 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:19
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:19.460809    4358 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:19.461091    4358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:19.461123    4358 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:19.461142    4358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:19.461449    4358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	W1229 06:46:19.461628    4358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22353-2531/.minikube/config/config.json: open /home/jenkins/minikube-integration/22353-2531/.minikube/config/config.json: no such file or directory
	I1229 06:46:19.462092    4358 out.go:368] Setting JSON to true
	I1229 06:46:19.462939    4358 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1730,"bootTime":1766989049,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 06:46:19.463032    4358 start.go:143] virtualization:  
	I1229 06:46:19.468476    4358 out.go:99] [download-only-357391] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1229 06:46:19.468673    4358 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball: no such file or directory
	I1229 06:46:19.468755    4358 notify.go:221] Checking for updates...
	I1229 06:46:19.471984    4358 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:19.475316    4358 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:19.478578    4358 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 06:46:19.481701    4358 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 06:46:19.484776    4358 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 06:46:19.490682    4358 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:19.490950    4358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:19.520170    4358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:46:19.520262    4358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:19.911141    4358 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 06:46:19.90166109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:19.911246    4358 docker.go:319] overlay module found
	I1229 06:46:19.914411    4358 out.go:99] Using the docker driver based on user configuration
	I1229 06:46:19.914453    4358 start.go:309] selected driver: docker
	I1229 06:46:19.914467    4358 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:19.914574    4358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:19.975198    4358 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 06:46:19.96600352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:19.975376    4358 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:19.975671    4358 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 06:46:19.975839    4358 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:19.979113    4358 out.go:171] Using Docker driver with root privileges
	I1229 06:46:19.982042    4358 cni.go:84] Creating CNI manager for ""
	I1229 06:46:19.982107    4358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1229 06:46:19.982123    4358 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 06:46:19.982202    4358 start.go:353] cluster config:
	{Name:download-only-357391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-357391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:46:19.985260    4358 out.go:99] Starting "download-only-357391" primary control-plane node in "download-only-357391" cluster
	I1229 06:46:19.985283    4358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1229 06:46:19.988148    4358 out.go:99] Pulling base image v0.0.48-1766979815-22353 ...
	I1229 06:46:19.988185    4358 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1229 06:46:19.988348    4358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 06:46:20.017413    4358 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:46:20.017620    4358 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 06:46:20.017738    4358 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:46:20.042002    4358 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1229 06:46:20.042048    4358 cache.go:65] Caching tarball of preloaded images
	I1229 06:46:20.042212    4358 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1229 06:46:20.045685    4358 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1229 06:46:20.045726    4358 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1229 06:46:20.045751    4358 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1229 06:46:20.127990    4358 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1229 06:46:20.128133    4358 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1229 06:46:24.162034    4358 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1229 06:46:24.162615    4358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/download-only-357391/config.json ...
	I1229 06:46:24.162661    4358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/download-only-357391/config.json: {Name:mk4eae4ab70c806f81492e8e2028c4269ad4173c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:24.165402    4358 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1229 06:46:24.165993    4358 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-357391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-357391"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
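Note: the "Last Start" log above shows how the preload is fetched: preload.go asks the GCS API for the tarball's md5 and then downloads the file with that checksum appended as a ?checksum= parameter. A rough manual equivalent (a sketch assuming curl and md5sum are available, reusing the URL and checksum printed above):

    URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
    curl -fLo preload.tar.lz4 "$URL"
    # Verify against the checksum the GCS API returned above.
    echo "38d7f581f2fa4226c8af2c9106b982b7  preload.tar.lz4" | md5sum -c -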

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-357391
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (3.5s)
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-786544 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-786544 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.498688419s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.50s)

TestDownloadOnly/v1.35.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1229 06:46:28.718799    4352 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 06:46:28.718835    4352 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-786544
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-786544: exit status 85 (86.774453ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-357391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-357391 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete  │ -p download-only-357391                                                                                                                                                               │ download-only-357391 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start   │ -o=json --download-only -p download-only-786544 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-786544 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:25.263847    4563 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:25.264444    4563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:25.264483    4563 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:25.264519    4563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:25.265370    4563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 06:46:25.265959    4563 out.go:368] Setting JSON to true
	I1229 06:46:25.266751    4563 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1736,"bootTime":1766989049,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 06:46:25.266892    4563 start.go:143] virtualization:  
	I1229 06:46:25.270483    4563 out.go:99] [download-only-786544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 06:46:25.270783    4563 notify.go:221] Checking for updates...
	I1229 06:46:25.273587    4563 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:25.276559    4563 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:25.279490    4563 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 06:46:25.282467    4563 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 06:46:25.285365    4563 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 06:46:25.291103    4563 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:25.291389    4563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:25.324638    4563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:46:25.324744    4563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:25.381535    4563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-29 06:46:25.372779711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:25.381632    4563 docker.go:319] overlay module found
	I1229 06:46:25.384645    4563 out.go:99] Using the docker driver based on user configuration
	I1229 06:46:25.384686    4563 start.go:309] selected driver: docker
	I1229 06:46:25.384696    4563 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:25.384802    4563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:25.444358    4563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-29 06:46:25.435495189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:25.444512    4563 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:25.444786    4563 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 06:46:25.444938    4563 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:25.448036    4563 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-786544 host does not exist
	  To start a cluster, run: "minikube start -p download-only-786544"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-786544
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.76s)
=== RUN   TestBinaryMirror
I1229 06:46:29.873428    4352 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-994164 --alsologtostderr --binary-mirror http://127.0.0.1:35713 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-994164" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-994164
--- PASS: TestBinaryMirror (0.76s)
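Note: TestBinaryMirror points --binary-mirror at a short-lived local HTTP server so that kubectl is fetched from it instead of dl.k8s.io. A sketch of trying the flag by hand; the profile name below is hypothetical and the directory layout the mirror must serve is not shown in this log, so treat this as an outline rather than a recipe:

    # Serve a local directory on the same port the test used.
    mkdir -p ./mirror
    python3 -m http.server 35713 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:35713 --driver=docker --container-runtime=containerd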

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-679786
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-679786: exit status 85 (159.920355ms)

-- stdout --
	* Profile "addons-679786" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-679786"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-679786
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-679786: exit status 85 (173.755405ms)

-- stdout --
	* Profile "addons-679786" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-679786"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (119.67s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-679786 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-679786 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m59.669843965s)
--- PASS: TestAddons/Setup (119.67s)
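Note: the Setup run enables every addon under test up front via repeated --addons flags. The same addons can be inspected or toggled afterwards on the running profile with the addon subcommands used elsewhere in this report, for example:

    # Show the enabled/disabled state of all addons for this profile.
    out/minikube-linux-arm64 -p addons-679786 addons list
    # Toggle an individual addon after the cluster is up.
    out/minikube-linux-arm64 -p addons-679786 addons disable metrics-server
    out/minikube-linux-arm64 -p addons-679786 addons enable metrics-server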

TestAddons/serial/Volcano (45.57s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 44.726671ms
addons_test.go:878: volcano-admission stabilized in 45.337959ms
addons_test.go:870: volcano-scheduler stabilized in 45.520442ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-cjdtn" [33ee7045-ff3e-42a5-b866-387d136175bd] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003862159s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-rqc2d" [1ae385d6-83e8-4867-8dea-7f4de3f40ec9] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:353: "volcano-admission-5986c947c8-rqc2d" [1ae385d6-83e8-4867-8dea-7f4de3f40ec9] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 9.004075139s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-xh924" [14b057f5-f86b-41c7-90a7-59ac17ffe9e4] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004322402s
addons_test.go:905: (dbg) Run:  kubectl --context addons-679786 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-679786 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-679786 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [8724a62f-708a-4693-a7fd-0e42f1f20c8e] Pending
helpers_test.go:353: "test-job-nginx-0" [8724a62f-708a-4693-a7fd-0e42f1f20c8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [8724a62f-708a-4693-a7fd-0e42f1f20c8e] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004772014s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable volcano --alsologtostderr -v=1: (11.844609711s)
--- PASS: TestAddons/serial/Volcano (45.57s)
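Note: the Volcano check submits testdata/vcjob.yaml and then polls for pods carrying the volcano.sh/job-name=test-job label. Outside the test harness, roughly the same wait can be expressed with kubectl directly (a sketch assuming the same namespace and label):

    kubectl --context addons-679786 create -f testdata/vcjob.yaml
    kubectl --context addons-679786 wait pod -n my-volcano \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s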

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-679786 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-679786 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.91s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-679786 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-679786 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b3a49d77-ccd3-46bf-94f9-2098ce894f5d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b3a49d77-ccd3-46bf-94f9-2098ce894f5d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003375591s
addons_test.go:696: (dbg) Run:  kubectl --context addons-679786 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-679786 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-679786 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-679786 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.91s)

TestAddons/parallel/Registry (15.5s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.94208ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-ftxjm" [2161eb93-34e3-4861-9e43-45bb00662311] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003756836s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-gqtft" [5f45c591-9376-4283-a198-20a48546bee9] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003366546s
addons_test.go:394: (dbg) Run:  kubectl --context addons-679786 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-679786 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-679786 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.501052232s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 ip
2025/12/29 06:49:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.50s)
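Note: the Registry check reaches the registry twice: from inside the cluster via its kube-system service DNS name, and from the host via the minikube IP on port 5000 (the DEBUG GET above). From the host, the standard registry HTTP API can be probed the same way (a sketch, assuming port 5000 stays exposed as it was in this run):

    REGISTRY_IP=$(out/minikube-linux-arm64 -p addons-679786 ip)
    # /v2/_catalog is a standard Docker registry API endpoint listing repositories.
    curl -s "http://${REGISTRY_IP}:5000/v2/_catalog"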

TestAddons/parallel/RegistryCreds (0.78s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 27.677275ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-679786
addons_test.go:334: (dbg) Run:  kubectl --context addons-679786 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/Ingress (17.71s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-679786 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-679786 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-679786 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [a6414640-11fb-49f5-9975-cb4c56ee801b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [a6414640-11fb-49f5-9975-cb4c56ee801b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003287478s
I1229 06:51:06.595975    4352 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-679786 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable ingress-dns --alsologtostderr -v=1: (1.838736776s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable ingress --alsologtostderr -v=1: (7.896716461s)
--- PASS: TestAddons/parallel/Ingress (17.71s)
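Note: the Ingress check curls the controller with a Host header and then resolves a test name against the ingress-dns responder at the minikube IP. The same two probes can be run from the host against the profile in this report (a sketch; reachability of the node IP from the host is assumed, as in this run):

    MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-679786 ip)
    # Hit the Ingress rule created from testdata/nginx-ingress-v1.yaml.
    curl -s -H 'Host: nginx.example.com' "http://${MINIKUBE_IP}/"
    # Ask the ingress-dns addon to resolve the example hostname.
    nslookup hello-john.test "${MINIKUBE_IP}"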

TestAddons/parallel/InspektorGadget (11.76s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-rfc5l" [da17b70e-fc47-40c7-8dca-09892681f101] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003331558s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable inspektor-gadget --alsologtostderr -v=1: (5.756569313s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.133187ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-d8hwd" [1a9aba2e-637e-400e-b55e-56ac0e78b961] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003626703s
addons_test.go:465: (dbg) Run:  kubectl --context addons-679786 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

                                                
                                    
TestAddons/parallel/CSI (43.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1229 06:49:48.431897    4352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1229 06:49:48.436028    4352 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1229 06:49:48.436056    4352 kapi.go:107] duration metric: took 6.664097ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.674689ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-679786 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-679786 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [b6e3eb6b-19ee-469a-88d7-c7259d507cfe] Pending
helpers_test.go:353: "task-pv-pod" [b6e3eb6b-19ee-469a-88d7-c7259d507cfe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [b6e3eb6b-19ee-469a-88d7-c7259d507cfe] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003578546s
addons_test.go:574: (dbg) Run:  kubectl --context addons-679786 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-679786 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-679786 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-679786 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-679786 delete pod task-pv-pod: (1.017142381s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-679786 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-679786 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-679786 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [b5edc849-2b1c-423f-85ff-2439d603bffb] Pending
helpers_test.go:353: "task-pv-pod-restore" [b5edc849-2b1c-423f-85ff-2439d603bffb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [b5edc849-2b1c-423f-85ff-2439d603bffb] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002978123s
addons_test.go:616: (dbg) Run:  kubectl --context addons-679786 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-679786 delete pod task-pv-pod-restore: (1.152514703s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-679786 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-679786 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable volumesnapshots --alsologtostderr -v=1: (1.022479374s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.973624033s)
--- PASS: TestAddons/parallel/CSI (43.06s)

                                                
                                    
TestAddons/parallel/Headlamp (16.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-679786 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-679786 --alsologtostderr -v=1: (1.006282416s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-wh7hp" [f5b6e0d6-49cf-45af-8424-963602bdda46] Pending
helpers_test.go:353: "headlamp-6d8d595f-wh7hp" [f5b6e0d6-49cf-45af-8424-963602bdda46] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-wh7hp" [f5b6e0d6-49cf-45af-8424-963602bdda46] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004120137s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable headlamp --alsologtostderr -v=1: (5.759921854s)
--- PASS: TestAddons/parallel/Headlamp (16.77s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-f6vnh" [4f0efacc-af55-4bbc-8357-29dd34f06eb7] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003464057s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
TestAddons/parallel/LocalPath (53.58s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-679786 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-679786 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [95456b70-524f-46e3-bac7-8dd2fd021ee2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [95456b70-524f-46e3-bac7-8dd2fd021ee2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [95456b70-524f-46e3-bac7-8dd2fd021ee2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003993333s
addons_test.go:969: (dbg) Run:  kubectl --context addons-679786 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 ssh "cat /opt/local-path-provisioner/pvc-600f5c52-7ff1-40cb-a3bc-f9ca7ad9b2b1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-679786 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-679786 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.113636569s)
--- PASS: TestAddons/parallel/LocalPath (53.58s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-h64qv" [f52903f4-6ca9-48b7-9796-b0d1fe58daff] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00322922s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-8dv4m" [6873c2dc-c41d-4b55-997b-17a2523ac49a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002922036s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-679786 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-679786 addons disable yakd --alsologtostderr -v=1: (5.961543082s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-679786
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-679786: (12.043263416s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-679786
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-679786
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-679786
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

                                                
                                    
TestCertOptions (30.93s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-264492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-264492 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (28.111574379s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-264492 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-264492 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-264492 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-264492" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-264492
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-264492: (2.079537735s)
--- PASS: TestCertOptions (30.93s)

                                                
                                    
TestCertExpiration (215.69s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-688553 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1229 07:28:30.655262    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-688553 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.626885377s)
E1229 07:29:56.867293    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-688553 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-688553 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.628188591s)
helpers_test.go:176: Cleaning up "cert-expiration-688553" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-688553
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-688553: (2.427658655s)
--- PASS: TestCertExpiration (215.69s)

                                                
                                    
TestDockerEnvContainerd (42.08s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-332992 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-332992 --driver=docker  --container-runtime=containerd: (26.701938346s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-332992"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-332992": (1.10337266s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5r1jtVihqrEU/agent.24087" SSH_AGENT_PID="24088" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5r1jtVihqrEU/agent.24087" SSH_AGENT_PID="24088" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5r1jtVihqrEU/agent.24087" SSH_AGENT_PID="24088" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.318489012s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5r1jtVihqrEU/agent.24087" SSH_AGENT_PID="24088" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-332992" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-332992
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-332992: (2.058632796s)
--- PASS: TestDockerEnvContainerd (42.08s)

                                                
                                    
TestErrorSpam/setup (28.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-221388 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-221388 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-221388 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-221388 --driver=docker  --container-runtime=containerd: (28.71411017s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (28.71s)

                                                
                                    
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.18s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 status
--- PASS: TestErrorSpam/status (1.18s)

                                                
                                    
TestErrorSpam/pause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.97s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

                                                
                                    
TestErrorSpam/stop (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 stop: (1.495889774s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221388 --log_dir /tmp/nospam-221388 stop
--- PASS: TestErrorSpam/stop (1.70s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/test/nested/copy/4352/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.01s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1229 06:53:30.657314    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.663274    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.673469    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.693951    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.734276    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.814584    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:30.975058    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:31.295693    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:31.936700    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:33.216940    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:35.777173    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:40.897403    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-421974 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (46.008871291s)
--- PASS: TestFunctional/serial/StartWithProxy (46.01s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1229 06:53:43.751592    4352 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-421974 --alsologtostderr -v=8: (7.185943842s)
functional_test.go:678: soft start took 7.18874638s for "functional-421974" cluster.
I1229 06:53:50.937838    4352 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (7.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-421974 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:3.1
E1229 06:53:51.137767    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:3.1: (1.282007098s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:3.3: (1.172345036s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 cache add registry.k8s.io/pause:latest: (1.085480872s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-421974 /tmp/TestFunctionalserialCacheCmdcacheadd_local4027155165/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache add minikube-local-cache-test:functional-421974
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache delete minikube-local-cache-test:functional-421974
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-421974
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.19501ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 kubectl -- --context functional-421974 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-421974 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (49.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1229 06:54:11.618051    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-421974 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.238054025s)
functional_test.go:776: restart took 49.238157977s for "functional-421974" cluster.
I1229 06:54:47.764891    4352 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (49.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-421974 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 logs: (1.445095008s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 logs --file /tmp/TestFunctionalserialLogsFileCmd3973013013/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 logs --file /tmp/TestFunctionalserialLogsFileCmd3973013013/001/logs.txt: (1.516591614s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-421974 apply -f testdata/invalidsvc.yaml
E1229 06:54:52.578704    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-421974
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-421974: exit status 115 (497.72117ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30958 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-421974 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 config get cpus: exit status 14 (86.022499ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 config get cpus: exit status 14 (80.105413ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-421974 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-421974 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 39273: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.20s)

                                                
                                    
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421974 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.667407ms)

                                                
                                                
-- stdout --
	* [functional-421974] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:55:25.435552   38920 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:55:25.435714   38920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:25.435729   38920 out.go:374] Setting ErrFile to fd 2...
	I1229 06:55:25.435734   38920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:25.435994   38920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 06:55:25.436353   38920 out.go:368] Setting JSON to false
	I1229 06:55:25.437304   38920 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2276,"bootTime":1766989049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 06:55:25.437371   38920 start.go:143] virtualization:  
	I1229 06:55:25.440572   38920 out.go:179] * [functional-421974] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 06:55:25.444291   38920 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:55:25.444630   38920 notify.go:221] Checking for updates...
	I1229 06:55:25.450157   38920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:55:25.452983   38920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 06:55:25.455790   38920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 06:55:25.458616   38920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 06:55:25.461568   38920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:55:25.464926   38920 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 06:55:25.465611   38920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:55:25.500201   38920 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:55:25.500316   38920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:55:25.570426   38920 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 06:55:25.56108817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:55:25.570519   38920 docker.go:319] overlay module found
	I1229 06:55:25.574349   38920 out.go:179] * Using the docker driver based on existing profile
	I1229 06:55:25.577443   38920 start.go:309] selected driver: docker
	I1229 06:55:25.577461   38920 start.go:928] validating driver "docker" against &{Name:functional-421974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-421974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:55:25.577560   38920 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:55:25.581260   38920 out.go:203] 
	W1229 06:55:25.584437   38920 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 06:55:25.587472   38920 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421974 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421974 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (207.572283ms)

                                                
                                                
-- stdout --
	* [functional-421974] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:55:25.231163   38875 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:55:25.231332   38875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:25.231362   38875 out.go:374] Setting ErrFile to fd 2...
	I1229 06:55:25.231381   38875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:25.232454   38875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 06:55:25.232939   38875 out.go:368] Setting JSON to false
	I1229 06:55:25.234004   38875 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2276,"bootTime":1766989049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 06:55:25.234113   38875 start.go:143] virtualization:  
	I1229 06:55:25.237791   38875 out.go:179] * [functional-421974] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1229 06:55:25.240879   38875 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:55:25.240998   38875 notify.go:221] Checking for updates...
	I1229 06:55:25.246694   38875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:55:25.249667   38875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 06:55:25.252587   38875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 06:55:25.255371   38875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 06:55:25.258246   38875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:55:25.261794   38875 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 06:55:25.262402   38875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:55:25.295413   38875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:55:25.295529   38875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:55:25.363244   38875 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 06:55:25.354240563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:55:25.363353   38875 docker.go:319] overlay module found
	I1229 06:55:25.366508   38875 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1229 06:55:25.369371   38875 start.go:309] selected driver: docker
	I1229 06:55:25.369394   38875 start.go:928] validating driver "docker" against &{Name:functional-421974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-421974 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:55:25.369492   38875 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:55:25.373155   38875 out.go:203] 
	W1229 06:55:25.376007   38875 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1229 06:55:25.378960   38875 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
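
The subtest above drives the same under-memory validation as DryRun, but checks that the RSRC_INSUFFICIENT_REQ_MEMORY message comes back localized (French here). A rough manual reproduction is sketched below; it assumes the functional-421974 profile already exists and that minikube picks its display language from the LC_ALL/LANG environment variables, which is how the harness appears to trigger the French output.

    # Hypothetical manual re-run of the localized memory check (locale handling is an assumption).
    LC_ALL=fr LANG=fr_FR.UTF-8 \
      out/minikube-linux-arm64 start -p functional-421974 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd
    echo "exit status: $?"   # the logged run exited with status 23 and the French message above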

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
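
The StatusCmd subtest exercises three output modes of `status`: the default table, a Go template rendered over the status struct (the Host, Kubelet, APIServer and Kubeconfig fields seen in the command above), and JSON. Collected here for reference, against an assumed running profile:

    out/minikube-linux-arm64 -p functional-421974 status
    out/minikube-linux-arm64 -p functional-421974 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-421974 status -o json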

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-421974 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-421974 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-jcf46" [a509d9b7-e872-41be-90f0-f4631526f99f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-jcf46" [a509d9b7-e872-41be-90f0-f4631526f99f] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003950656s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31027
functional_test.go:1685: http://192.168.49.2:31027: success! body:
Request served by hello-node-connect-5d95464fd4-jcf46

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31027
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.61s)
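
The flow above is: create a deployment from the mirrored echo-server image, expose it as a NodePort service, wait for the pod, then ask minikube for the node URL and hit it. A hand-run sketch of the same steps follows; the kubectl wait step, its timeout, and the curl call are additions not taken from the log.

    kubectl --context functional-421974 create deployment hello-node-connect \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-421974 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    kubectl --context functional-421974 wait --for=condition=Ready \
      pod -l app=hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-arm64 -p functional-421974 service hello-node-connect --url)
    curl -s "$URL"   # echo-server replies with the request details, as in the body above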

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [8e9cd8cc-1608-4d5a-83db-49702f45c479] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003677096s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-421974 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-421974 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-421974 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-421974 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e4b87f02-c52a-47cb-8557-a64b0933a6f9] Pending
helpers_test.go:353: "sp-pod" [e4b87f02-c52a-47cb-8557-a64b0933a6f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e4b87f02-c52a-47cb-8557-a64b0933a6f9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003166171s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-421974 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-421974 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-421974 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2a54486e-c83c-47e0-86f8-56d22e01bbcf] Pending
helpers_test.go:353: "sp-pod" [2a54486e-c83c-47e0-86f8-56d22e01bbcf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003858884s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-421974 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.90s)
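
The PersistentVolumeClaim test checks that data written through the claim survives pod recreation: apply the claim and a pod mounting it, touch a file, delete and recreate the pod, then list the mount again. A manual sketch using the same testdata manifests (the kubectl wait steps and timeouts are assumptions; paths are relative to the test working directory):

    kubectl --context functional-421974 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-421974 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-421974 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-421974 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-421974 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-421974 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-421974 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-421974 exec sp-pod -- ls /tmp/mount   # foo should still be listed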

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh -n functional-421974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cp functional-421974:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1478886216/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh -n functional-421974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh -n functional-421974 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)

                                                
                                    
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4352/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/test/nested/copy/4352/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
TestFunctional/parallel/CertSync (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4352.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/ssl/certs/4352.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4352.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /usr/share/ca-certificates/4352.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/43522.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/ssl/certs/43522.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/43522.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /usr/share/ca-certificates/43522.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
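
FileSync and CertSync both rely on minikube's file-sync behaviour: files staged under $MINIKUBE_HOME/files/ are copied into the node at the same relative path on start, and staged certificates additionally show up under the trust-store paths checked above (including the hashed names such as 51391683.0). Below is a sketch of the FileSync case only; the staging path, file content, and the copy-on-start assumption are illustrative rather than taken from the log.

    # Stage a file on the host; minikube should copy it into the node on the next start.
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/4352"
    echo "Test file for checking file sync process" \
      > "$MINIKUBE_HOME/files/etc/test/nested/copy/4352/hosts"
    out/minikube-linux-arm64 start -p functional-421974 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p functional-421974 ssh "sudo cat /etc/test/nested/copy/4352/hosts"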

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-421974 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "sudo systemctl is-active docker": exit status 1 (320.749837ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "sudo systemctl is-active crio": exit status 1 (372.97229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
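
Since this profile runs containerd, the test expects the other runtimes' services to be inactive inside the node; `systemctl is-active` prints "inactive" and exits non-zero, which surfaces above as ssh status 3 wrapped in minikube exit status 1. A quick way to check all three runtimes in one pass (the loop itself is an addition):

    for svc in containerd docker crio; do
      out/minikube-linux-arm64 -p functional-421974 ssh "sudo systemctl is-active $svc" || true
    done
    # Expected on this profile: containerd -> active, docker -> inactive, crio -> inactive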

                                                
                                    
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 36293: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-421974 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a9a5bef1-e678-468f-b86d-fe2c5a491211] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a9a5bef1-e678-468f-b86d-fe2c5a491211] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003837801s
I1229 06:55:06.325566    4352 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-421974 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.140.154 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
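
Taken together, the TunnelCmd serial subtests cover the LoadBalancer workflow: run `minikube tunnel` in the background, apply testdata/testsvc.yaml (which creates the nginx-svc LoadBalancer service), wait for the pod, read the ingress IP the tunnel assigns, curl it directly, then stop the tunnel. A rough manual equivalent; the backgrounding, kubectl wait, and curl steps are simplifications, and the tunnel may prompt for sudo:

    out/minikube-linux-arm64 -p functional-421974 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    kubectl --context functional-421974 apply -f testdata/testsvc.yaml
    kubectl --context functional-421974 wait --for=condition=Ready pod -l run=nginx-svc --timeout=240s
    INGRESS_IP=$(kubectl --context functional-421974 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$INGRESS_IP" >/dev/null && echo "tunnel at http://$INGRESS_IP is working"
    kill "$TUNNEL_PID"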

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-421974 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-421974 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-rrx6v" [99b88b9d-7f99-44c5-b7ef-c1cd206d8ba8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-rrx6v" [99b88b9d-7f99-44c5-b7ef-c1cd206d8ba8] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003901186s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "372.359539ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "55.571549ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "355.285638ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "62.857681ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
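
The three ProfileCmd subtests compare the full profile listing against its lighter variants; the timings above (roughly 370ms versus 55-60ms) suggest that `-l` / `--light` skip the per-profile status probing, though that reading is an inference from this run rather than something the log states. The invocations, collected:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 profile list -l
    out/minikube-linux-arm64 profile list -o json
    out/minikube-linux-arm64 profile list -o json --light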

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdany-port3413373453/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766991320622375310" to /tmp/TestFunctionalparallelMountCmdany-port3413373453/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766991320622375310" to /tmp/TestFunctionalparallelMountCmdany-port3413373453/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766991320622375310" to /tmp/TestFunctionalparallelMountCmdany-port3413373453/001/test-1766991320622375310
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.455298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1229 06:55:20.987052    4352 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 test-1766991320622375310
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh cat /mount-9p/test-1766991320622375310
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-421974 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c9dd2b45-2137-4a86-996a-96747cd80817] Pending
helpers_test.go:353: "busybox-mount" [c9dd2b45-2137-4a86-996a-96747cd80817] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c9dd2b45-2137-4a86-996a-96747cd80817] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c9dd2b45-2137-4a86-996a-96747cd80817] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004018045s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-421974 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdany-port3413373453/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
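
The any-port mount test runs `minikube mount` as a background daemon, verifies the 9p mount from inside the node with findmnt (note the retry after the first failed check, while the mount was still coming up), then has a busybox pod read and write files through it via testdata/busybox-mount-test.yaml. A trimmed manual version; the temp directory, sleep, and kill handling are simplifications:

    MOUNT_DIR=$(mktemp -d)
    echo "created-by-hand" > "$MOUNT_DIR/created-by-test"
    out/minikube-linux-arm64 mount -p functional-421974 "$MOUNT_DIR:/mount-9p" --alsologtostderr -v=1 &
    MOUNT_PID=$!
    sleep 2
    out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-421974 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"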

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service list -o json
functional_test.go:1509: Took "614.396168ms" to run "out/minikube-linux-arm64 -p functional-421974 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31076
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31076
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
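
The ServiceCmd subtests resolve the hello-node NodePort in several shapes: a plain service list, JSON output, an HTTPS URL, a Go-template `--format` that extracts only the node IP, and the default URL form. Collected here for reference:

    out/minikube-linux-arm64 -p functional-421974 service list
    out/minikube-linux-arm64 -p functional-421974 service list -o json
    out/minikube-linux-arm64 -p functional-421974 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-421974 service hello-node --url --format='{{.IP}}'
    out/minikube-linux-arm64 -p functional-421974 service hello-node --url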

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdspecific-port3637233388/001:/mount-9p --alsologtostderr -v=1 --port 39445]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (599.525021ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1229 06:55:29.979965    4352 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdspecific-port3637233388/001:/mount-9p --alsologtostderr -v=1 --port 39445] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "sudo umount -f /mount-9p": exit status 1 (374.776741ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-421974 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdspecific-port3637233388/001:/mount-9p --alsologtostderr -v=1 --port 39445] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T" /mount1: exit status 1 (1.176436464s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T" /mount2
2025/12/29 06:55:33 [DEBUG] GET http://127.0.0.1:39599/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-421974 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2282650241/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.98s)
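
VerifyCleanup starts three mounts of the same host directory at /mount1, /mount2 and /mount3, then uses the kill flag to terminate every mount process for the profile at once; the "unable to find parent, assuming dead" lines afterwards simply confirm the daemons are already gone. The cleanup call on its own:

    # Tear down any lingering "minikube mount" processes for this profile.
    out/minikube-linux-arm64 mount -p functional-421974 --kill=true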

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 version -o=json --components: (1.314296147s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421974 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-421974
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421974 image ls --format short --alsologtostderr:
I1229 06:55:41.416066   42065 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:41.416225   42065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.416247   42065 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:41.416265   42065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.416684   42065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 06:55:41.417791   42065 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.417987   42065 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.419376   42065 cli_runner.go:164] Run: docker container inspect functional-421974 --format={{.State.Status}}
I1229 06:55:41.451532   42065 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:41.451610   42065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421974
I1229 06:55:41.476753   42065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/functional-421974/id_rsa Username:docker}
I1229 06:55:41.591608   42065 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421974 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test       │ functional-421974                     │ sha256:3dff03 │ 991B   │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-421974                     │ sha256:ce2d2c │ 2.17MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421974 image ls --format table --alsologtostderr:
I1229 06:55:42.037925   42242 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:42.038145   42242 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:42.038174   42242 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:42.038196   42242 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:42.038485   42242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 06:55:42.039148   42242 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:42.039337   42242 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:42.039955   42242 cli_runner.go:164] Run: docker container inspect functional-421974 --format={{.State.Status}}
I1229 06:55:42.063854   42242 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:42.063915   42242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421974
I1229 06:55:42.087024   42242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/functional-421974/id_rsa Username:docker}
I1229 06:55:42.204576   42242 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421974 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoT
ags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:3dff032ae811f7b45c9e7c76d58df95d4b3219f976a91dea815ce37f4277526c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-421974"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e
5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4
a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974"],"size":"2173567"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:c96ee3c
17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421974 image ls --format json --alsologtostderr:
I1229 06:55:41.721097   42145 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:41.721202   42145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.721208   42145 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:41.721212   42145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.721520   42145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 06:55:41.722804   42145 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.723154   42145 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.724732   42145 cli_runner.go:164] Run: docker container inspect functional-421974 --format={{.State.Status}}
I1229 06:55:41.763504   42145 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:41.763559   42145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421974
I1229 06:55:41.814694   42145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/functional-421974/id_rsa Username:docker}
I1229 06:55:41.931751   42145 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421974 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:3dff032ae811f7b45c9e7c76d58df95d4b3219f976a91dea815ce37f4277526c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-421974
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
size: "2173567"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421974 image ls --format yaml --alsologtostderr:
I1229 06:55:41.421421   42066 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:41.421613   42066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.421628   42066 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:41.421634   42066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:41.421932   42066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 06:55:41.422626   42066 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.422781   42066 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:41.423385   42066 cli_runner.go:164] Run: docker container inspect functional-421974 --format={{.State.Status}}
I1229 06:55:41.456046   42066 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:41.456116   42066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421974
I1229 06:55:41.490701   42066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/functional-421974/id_rsa Username:docker}
I1229 06:55:41.600988   42066 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421974 ssh pgrep buildkitd: exit status 1 (356.248822ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image build -t localhost/my-image:functional-421974 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 image build -t localhost/my-image:functional-421974 testdata/build --alsologtostderr: (3.583371515s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421974 image build -t localhost/my-image:functional-421974 testdata/build --alsologtostderr:
I1229 06:55:42.053489   42247 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:42.053811   42247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:42.053827   42247 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:42.053833   42247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:42.054121   42247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 06:55:42.054757   42247 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:42.056200   42247 config.go:182] Loaded profile config "functional-421974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:55:42.058147   42247 cli_runner.go:164] Run: docker container inspect functional-421974 --format={{.State.Status}}
I1229 06:55:42.082569   42247 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:42.082635   42247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421974
I1229 06:55:42.113184   42247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/functional-421974/id_rsa Username:docker}
I1229 06:55:42.230203   42247 build_images.go:162] Building image from path: /tmp/build.1505126346.tar
I1229 06:55:42.230278   42247 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1229 06:55:42.259149   42247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1505126346.tar
I1229 06:55:42.265510   42247 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1505126346.tar: stat -c "%s %y" /var/lib/minikube/build/build.1505126346.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1505126346.tar': No such file or directory
I1229 06:55:42.265540   42247 ssh_runner.go:362] scp /tmp/build.1505126346.tar --> /var/lib/minikube/build/build.1505126346.tar (3072 bytes)
I1229 06:55:42.289586   42247 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1505126346
I1229 06:55:42.298389   42247 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1505126346 -xf /var/lib/minikube/build/build.1505126346.tar
I1229 06:55:42.307087   42247 containerd.go:402] Building image: /var/lib/minikube/build/build.1505126346
I1229 06:55:42.307158   42247 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1505126346 --local dockerfile=/var/lib/minikube/build/build.1505126346 --output type=image,name=localhost/my-image:functional-421974
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:0ef4f25f1c14a75e1bfca4d5081fa4be9b0863e8510c2e9ce560c21e9dcc6106
#8 exporting manifest sha256:0ef4f25f1c14a75e1bfca4d5081fa4be9b0863e8510c2e9ce560c21e9dcc6106 0.0s done
#8 exporting config sha256:e99a342f40c0c8cd7ca9943c6c8503076df94a208e3efaa78044cbb9a2876e0a 0.0s done
#8 naming to localhost/my-image:functional-421974 done
#8 DONE 0.2s
I1229 06:55:45.538781   42247 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1505126346 --local dockerfile=/var/lib/minikube/build/build.1505126346 --output type=image,name=localhost/my-image:functional-421974: (3.231580278s)
I1229 06:55:45.538862   42247 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1505126346
I1229 06:55:45.547539   42247 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1505126346.tar
I1229 06:55:45.555936   42247 build_images.go:218] Built localhost/my-image:functional-421974 from /tmp/build.1505126346.tar
I1229 06:55:45.555975   42247 build_images.go:134] succeeded building to: functional-421974
I1229 06:55:45.555981   42247 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-421974 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr: (1.099352796s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-421974 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-421974
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-421974
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-421974
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (166.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1229 06:56:14.498954    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:58:30.655010    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m45.810021362s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (166.73s)

TestMultiControlPlane/serial/DeployApp (7.24s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 kubectl -- rollout status deployment/busybox: (4.246186989s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-64vxj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-k6p5q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-lpxxf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-64vxj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-k6p5q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-lpxxf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-64vxj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-k6p5q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-lpxxf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.24s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-64vxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-64vxj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-k6p5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-k6p5q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-lpxxf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 kubectl -- exec busybox-769dd8b7dd-lpxxf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)

TestMultiControlPlane/serial/AddWorkerNode (30.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node add --alsologtostderr -v 5
E1229 06:58:58.339172    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 node add --alsologtostderr -v 5: (29.054971025s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5: (1.068432103s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.12s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-675419 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.144588492s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (20.18s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 status --output json --alsologtostderr -v 5: (1.141473016s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp testdata/cp-test.txt ha-675419:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4193278991/001/cp-test_ha-675419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419:/home/docker/cp-test.txt ha-675419-m02:/home/docker/cp-test_ha-675419_ha-675419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test_ha-675419_ha-675419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419:/home/docker/cp-test.txt ha-675419-m03:/home/docker/cp-test_ha-675419_ha-675419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test_ha-675419_ha-675419-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419:/home/docker/cp-test.txt ha-675419-m04:/home/docker/cp-test_ha-675419_ha-675419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test_ha-675419_ha-675419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp testdata/cp-test.txt ha-675419-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4193278991/001/cp-test_ha-675419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m02:/home/docker/cp-test.txt ha-675419:/home/docker/cp-test_ha-675419-m02_ha-675419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test_ha-675419-m02_ha-675419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m02:/home/docker/cp-test.txt ha-675419-m03:/home/docker/cp-test_ha-675419-m02_ha-675419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test_ha-675419-m02_ha-675419-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m02:/home/docker/cp-test.txt ha-675419-m04:/home/docker/cp-test_ha-675419-m02_ha-675419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test_ha-675419-m02_ha-675419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp testdata/cp-test.txt ha-675419-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4193278991/001/cp-test_ha-675419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m03:/home/docker/cp-test.txt ha-675419:/home/docker/cp-test_ha-675419-m03_ha-675419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test_ha-675419-m03_ha-675419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m03:/home/docker/cp-test.txt ha-675419-m02:/home/docker/cp-test_ha-675419-m03_ha-675419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test_ha-675419-m03_ha-675419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m03:/home/docker/cp-test.txt ha-675419-m04:/home/docker/cp-test_ha-675419-m03_ha-675419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test_ha-675419-m03_ha-675419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp testdata/cp-test.txt ha-675419-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4193278991/001/cp-test_ha-675419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m04:/home/docker/cp-test.txt ha-675419:/home/docker/cp-test_ha-675419-m04_ha-675419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419 "sudo cat /home/docker/cp-test_ha-675419-m04_ha-675419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m04:/home/docker/cp-test.txt ha-675419-m02:/home/docker/cp-test_ha-675419-m04_ha-675419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m02 "sudo cat /home/docker/cp-test_ha-675419-m04_ha-675419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 cp ha-675419-m04:/home/docker/cp-test.txt ha-675419-m03:/home/docker/cp-test_ha-675419-m04_ha-675419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 ssh -n ha-675419-m03 "sudo cat /home/docker/cp-test_ha-675419-m04_ha-675419-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.18s)

TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 node stop m02 --alsologtostderr -v 5: (12.225871759s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5: exit status 7 (804.015946ms)

-- stdout --
	ha-675419
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-675419-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675419-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-675419-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1229 06:59:47.995267   58663 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:59:47.996222   58663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:59:47.996270   58663 out.go:374] Setting ErrFile to fd 2...
	I1229 06:59:47.996292   58663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:59:47.996855   58663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 06:59:47.997297   58663 out.go:368] Setting JSON to false
	I1229 06:59:47.997357   58663 mustload.go:66] Loading cluster: ha-675419
	I1229 06:59:47.998087   58663 config.go:182] Loaded profile config "ha-675419": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 06:59:47.998129   58663 status.go:174] checking status of ha-675419 ...
	I1229 06:59:47.998960   58663 cli_runner.go:164] Run: docker container inspect ha-675419 --format={{.State.Status}}
	I1229 06:59:48.001200   58663 notify.go:221] Checking for updates...
	I1229 06:59:48.027462   58663 status.go:371] ha-675419 host status = "Running" (err=<nil>)
	I1229 06:59:48.027490   58663 host.go:66] Checking if "ha-675419" exists ...
	I1229 06:59:48.027814   58663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675419
	I1229 06:59:48.057230   58663 host.go:66] Checking if "ha-675419" exists ...
	I1229 06:59:48.057534   58663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:59:48.057587   58663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675419
	I1229 06:59:48.077288   58663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/ha-675419/id_rsa Username:docker}
	I1229 06:59:48.190404   58663 ssh_runner.go:195] Run: systemctl --version
	I1229 06:59:48.197349   58663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:59:48.210979   58663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:59:48.287126   58663 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-29 06:59:48.274751138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:59:48.287727   58663 kubeconfig.go:125] found "ha-675419" server: "https://192.168.49.254:8443"
	I1229 06:59:48.287760   58663 api_server.go:166] Checking apiserver status ...
	I1229 06:59:48.287811   58663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:59:48.302305   58663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	I1229 06:59:48.312163   58663 api_server.go:192] apiserver freezer: "13:freezer:/docker/eb168034baffa35cdf4828bf47616c130d7e332038a04e4a0dec2659cc2d12c2/kubepods/burstable/pod680755115e6fe481b62417a87ef41ee6/5a4efb0cf7f7f665f40382f47045092fdccd2217841485fb6609d0dea9b1f429"
	I1229 06:59:48.312246   58663 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eb168034baffa35cdf4828bf47616c130d7e332038a04e4a0dec2659cc2d12c2/kubepods/burstable/pod680755115e6fe481b62417a87ef41ee6/5a4efb0cf7f7f665f40382f47045092fdccd2217841485fb6609d0dea9b1f429/freezer.state
	I1229 06:59:48.320484   58663 api_server.go:214] freezer state: "THAWED"
	I1229 06:59:48.320521   58663 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 06:59:48.328930   58663 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 06:59:48.328960   58663 status.go:463] ha-675419 apiserver status = Running (err=<nil>)
	I1229 06:59:48.328971   58663 status.go:176] ha-675419 status: &{Name:ha-675419 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:59:48.329014   58663 status.go:174] checking status of ha-675419-m02 ...
	I1229 06:59:48.329392   58663 cli_runner.go:164] Run: docker container inspect ha-675419-m02 --format={{.State.Status}}
	I1229 06:59:48.348904   58663 status.go:371] ha-675419-m02 host status = "Stopped" (err=<nil>)
	I1229 06:59:48.348957   58663 status.go:384] host is not running, skipping remaining checks
	I1229 06:59:48.348965   58663 status.go:176] ha-675419-m02 status: &{Name:ha-675419-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:59:48.348985   58663 status.go:174] checking status of ha-675419-m03 ...
	I1229 06:59:48.349464   58663 cli_runner.go:164] Run: docker container inspect ha-675419-m03 --format={{.State.Status}}
	I1229 06:59:48.372361   58663 status.go:371] ha-675419-m03 host status = "Running" (err=<nil>)
	I1229 06:59:48.372387   58663 host.go:66] Checking if "ha-675419-m03" exists ...
	I1229 06:59:48.372689   58663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675419-m03
	I1229 06:59:48.391841   58663 host.go:66] Checking if "ha-675419-m03" exists ...
	I1229 06:59:48.392159   58663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:59:48.392203   58663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675419-m03
	I1229 06:59:48.410053   58663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/ha-675419-m03/id_rsa Username:docker}
	I1229 06:59:48.514847   58663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:59:48.528541   58663 kubeconfig.go:125] found "ha-675419" server: "https://192.168.49.254:8443"
	I1229 06:59:48.528570   58663 api_server.go:166] Checking apiserver status ...
	I1229 06:59:48.528657   58663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:59:48.542369   58663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	I1229 06:59:48.551377   58663 api_server.go:192] apiserver freezer: "13:freezer:/docker/b56d16c2e6ce6e148729cf78915c41310e7479b72f70826e808b7d0e16a60fc5/kubepods/burstable/pod385e64f1df27e10d2398221f1e0fc8ab/68be6d60a9e7ed77454bac17f7ef6ff4f51093d65e4843478e15198da5df61f7"
	I1229 06:59:48.551448   58663 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b56d16c2e6ce6e148729cf78915c41310e7479b72f70826e808b7d0e16a60fc5/kubepods/burstable/pod385e64f1df27e10d2398221f1e0fc8ab/68be6d60a9e7ed77454bac17f7ef6ff4f51093d65e4843478e15198da5df61f7/freezer.state
	I1229 06:59:48.559078   58663 api_server.go:214] freezer state: "THAWED"
	I1229 06:59:48.559116   58663 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 06:59:48.569150   58663 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 06:59:48.569181   58663 status.go:463] ha-675419-m03 apiserver status = Running (err=<nil>)
	I1229 06:59:48.569198   58663 status.go:176] ha-675419-m03 status: &{Name:ha-675419-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:59:48.569215   58663 status.go:174] checking status of ha-675419-m04 ...
	I1229 06:59:48.569582   58663 cli_runner.go:164] Run: docker container inspect ha-675419-m04 --format={{.State.Status}}
	I1229 06:59:48.588296   58663 status.go:371] ha-675419-m04 host status = "Running" (err=<nil>)
	I1229 06:59:48.588319   58663 host.go:66] Checking if "ha-675419-m04" exists ...
	I1229 06:59:48.588619   58663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-675419-m04
	I1229 06:59:48.605690   58663 host.go:66] Checking if "ha-675419-m04" exists ...
	I1229 06:59:48.606046   58663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:59:48.606095   58663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-675419-m04
	I1229 06:59:48.625484   58663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/ha-675419-m04/id_rsa Username:docker}
	I1229 06:59:48.730750   58663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:59:48.745016   58663 status.go:176] ha-675419-m04 status: &{Name:ha-675419-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (16.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node start m02 --alsologtostderr -v 5
E1229 06:59:56.867674    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:56.872928    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:56.883173    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:56.903451    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:56.943732    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:57.024613    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:57.185571    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:57.505839    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:58.146884    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:59.427828    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:01.995858    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 node start m02 --alsologtostderr -v 5: (14.82990647s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5: (1.383636911s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (16.38s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1229 07:00:07.116682    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.22687559s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 stop --alsologtostderr -v 5
E1229 07:00:17.357882    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:37.838164    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 stop --alsologtostderr -v 5: (37.844275208s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 start --wait true --alsologtostderr -v 5
E1229 07:01:18.798688    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 start --wait true --alsologtostderr -v 5: (1m8.739704528s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.72s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 node delete m03 --alsologtostderr -v 5: (10.425094975s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 stop --alsologtostderr -v 5
E1229 07:02:40.719118    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 stop --alsologtostderr -v 5: (36.422689212s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5: exit status 7 (113.694356ms)

                                                
                                                
-- stdout --
	ha-675419
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675419-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-675419-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:02:42.670103   73464 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:02:42.670216   73464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:02:42.670227   73464 out.go:374] Setting ErrFile to fd 2...
	I1229 07:02:42.670233   73464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:02:42.670502   73464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:02:42.670710   73464 out.go:368] Setting JSON to false
	I1229 07:02:42.670743   73464 mustload.go:66] Loading cluster: ha-675419
	I1229 07:02:42.670800   73464 notify.go:221] Checking for updates...
	I1229 07:02:42.671158   73464 config.go:182] Loaded profile config "ha-675419": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:02:42.671177   73464 status.go:174] checking status of ha-675419 ...
	I1229 07:02:42.671683   73464 cli_runner.go:164] Run: docker container inspect ha-675419 --format={{.State.Status}}
	I1229 07:02:42.691372   73464 status.go:371] ha-675419 host status = "Stopped" (err=<nil>)
	I1229 07:02:42.691395   73464 status.go:384] host is not running, skipping remaining checks
	I1229 07:02:42.691402   73464 status.go:176] ha-675419 status: &{Name:ha-675419 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:02:42.691432   73464 status.go:174] checking status of ha-675419-m02 ...
	I1229 07:02:42.691788   73464 cli_runner.go:164] Run: docker container inspect ha-675419-m02 --format={{.State.Status}}
	I1229 07:02:42.714089   73464 status.go:371] ha-675419-m02 host status = "Stopped" (err=<nil>)
	I1229 07:02:42.714114   73464 status.go:384] host is not running, skipping remaining checks
	I1229 07:02:42.714121   73464 status.go:176] ha-675419-m02 status: &{Name:ha-675419-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:02:42.714139   73464 status.go:174] checking status of ha-675419-m04 ...
	I1229 07:02:42.714452   73464 cli_runner.go:164] Run: docker container inspect ha-675419-m04 --format={{.State.Status}}
	I1229 07:02:42.734220   73464 status.go:371] ha-675419-m04 host status = "Stopped" (err=<nil>)
	I1229 07:02:42.734243   73464 status.go:384] host is not running, skipping remaining checks
	I1229 07:02:42.734250   73464 status.go:176] ha-675419-m04 status: &{Name:ha-675419-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (61.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1229 07:03:30.654991    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.18531843s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (51.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 node add --control-plane --alsologtostderr -v 5: (50.212112348s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-675419 status --alsologtostderr -v 5: (1.081647347s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (51.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.073918788s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestJSONOutput/start/Command (46.57s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-600581 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1229 07:04:56.869200    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:05:24.562073    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-600581 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.560819344s)
--- PASS: TestJSONOutput/start/Command (46.57s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-600581 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-600581 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-600581 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-600581 --output=json --user=testUser: (5.999902536s)
--- PASS: TestJSONOutput/stop/Command (6.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.45s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-010554 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-010554 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.55522ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"408dd728-ab3c-4ce7-a028-603c1c07c96e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-010554] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5961884a-3e90-4337-af49-02548069ce06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"39298843-e721-4dac-9df2-51df52415266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"775dbda4-b8e7-44ac-8ed9-8e0fc7f2a29f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig"}}
	{"specversion":"1.0","id":"3f148b12-1117-4055-8956-fb57645550a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube"}}
	{"specversion":"1.0","id":"ecb30b5b-1926-4dff-a809-36344dce7c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9511a105-bec2-4f4e-aeef-9f669829a404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dee36d37-88bc-405c-8958-55b2dc6f90a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-010554" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-010554
--- PASS: TestErrorJSONOutput (0.45s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.54s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-494276 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-494276 --network=: (31.384360677s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-494276" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-494276
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-494276: (2.128185948s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.54s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.48s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-491991 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-491991 --network=bridge: (29.324820605s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-491991" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-491991
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-491991: (2.124849624s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.48s)

                                                
                                    
TestKicExistingNetwork (30.1s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1229 07:06:50.372417    4352 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:06:50.388935    4352 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:06:50.389062    4352 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1229 07:06:50.389084    4352 cli_runner.go:164] Run: docker network inspect existing-network
W1229 07:06:50.405873    4352 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1229 07:06:50.405909    4352 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1229 07:06:50.405922    4352 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1229 07:06:50.406022    4352 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:06:50.423467    4352 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
I1229 07:06:50.423794    4352 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017efa80}
I1229 07:06:50.424552    4352 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1229 07:06:50.424640    4352 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1229 07:06:50.490457    4352 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-483164 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-483164 --network=existing-network: (27.798822775s)
helpers_test.go:176: Cleaning up "existing-network-483164" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-483164
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-483164: (2.14014863s)
I1229 07:07:20.448529    4352 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.10s)

                                                
                                    
TestKicCustomSubnet (29.66s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-744824 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-744824 --subnet=192.168.60.0/24: (27.382645034s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-744824 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-744824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-744824
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-744824: (2.247359343s)
--- PASS: TestKicCustomSubnet (29.66s)

                                                
                                    
TestKicStaticIP (31.26s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-291720 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-291720 --static-ip=192.168.200.200: (28.931752759s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-291720 ip
helpers_test.go:176: Cleaning up "static-ip-291720" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-291720
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-291720: (2.176665211s)
--- PASS: TestKicStaticIP (31.26s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (59.91s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-580894 --driver=docker  --container-runtime=containerd
E1229 07:08:30.655288    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-580894 --driver=docker  --container-runtime=containerd: (26.396223641s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-583971 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-583971 --driver=docker  --container-runtime=containerd: (27.476818777s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-580894
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-583971
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-583971" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-583971
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-583971: (2.088257858s)
helpers_test.go:176: Cleaning up "first-580894" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-580894
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-580894: (2.376000403s)
--- PASS: TestMinikubeProfile (59.91s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-787013 --memory=3072 --mount-string /tmp/TestMountStartserial2332922020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-787013 --memory=3072 --mount-string /tmp/TestMountStartserial2332922020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.349815445s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.36s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-787013 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-788744 --memory=3072 --mount-string /tmp/TestMountStartserial2332922020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-788744 --memory=3072 --mount-string /tmp/TestMountStartserial2332922020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.543527803s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.54s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788744 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-787013 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-787013 --alsologtostderr -v=5: (1.720473496s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788744 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-788744
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-788744: (1.325759874s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.7s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-788744
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-788744: (6.698655966s)
--- PASS: TestMountStart/serial/RestartStopped (7.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788744 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-046403 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1229 07:09:53.700033    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:09:56.869166    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-046403 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.702072685s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.25s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-046403 -- rollout status deployment/busybox: (3.054641901s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-9dbv4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-f552n -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-9dbv4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-f552n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-9dbv4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-f552n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.89s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-9dbv4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-9dbv4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-f552n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-046403 -- exec busybox-769dd8b7dd-f552n -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                    
TestMultiNode/serial/AddNode (27.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-046403 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-046403 -v=5 --alsologtostderr: (27.002882701s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.73s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-046403 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp testdata/cp-test.txt multinode-046403:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile393213749/001/cp-test_multinode-046403.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403:/home/docker/cp-test.txt multinode-046403-m02:/home/docker/cp-test_multinode-046403_multinode-046403-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test_multinode-046403_multinode-046403-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403:/home/docker/cp-test.txt multinode-046403-m03:/home/docker/cp-test_multinode-046403_multinode-046403-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test_multinode-046403_multinode-046403-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp testdata/cp-test.txt multinode-046403-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile393213749/001/cp-test_multinode-046403-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m02:/home/docker/cp-test.txt multinode-046403:/home/docker/cp-test_multinode-046403-m02_multinode-046403.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test_multinode-046403-m02_multinode-046403.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m02:/home/docker/cp-test.txt multinode-046403-m03:/home/docker/cp-test_multinode-046403-m02_multinode-046403-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test_multinode-046403-m02_multinode-046403-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp testdata/cp-test.txt multinode-046403-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile393213749/001/cp-test_multinode-046403-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m03:/home/docker/cp-test.txt multinode-046403:/home/docker/cp-test_multinode-046403-m03_multinode-046403.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403 "sudo cat /home/docker/cp-test_multinode-046403-m03_multinode-046403.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 cp multinode-046403-m03:/home/docker/cp-test.txt multinode-046403-m02:/home/docker/cp-test_multinode-046403-m03_multinode-046403-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 ssh -n multinode-046403-m02 "sudo cat /home/docker/cp-test_multinode-046403-m03_multinode-046403-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.77s)
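The CopyFile sequence above is long but mechanical: copy a local file into a node with minikube cp, then cat it back over minikube ssh and compare. A condensed sketch of one such round trip, assuming a minikube binary on PATH (this CI run uses out/minikube-linux-arm64 instead) and a running profile named multinode-046403:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// run invokes the minikube binary and returns its stdout, aborting on failure.
func run(args ...string) []byte {
	out, err := exec.Command("minikube", args...).Output()
	if err != nil {
		log.Fatalf("minikube %v failed: %v", args, err)
	}
	return out
}

func main() {
	const profile = "multinode-046403"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the local file into the control-plane node, then read it back over ssh.
	run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	got := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("round trip mismatch:\nwant %q\ngot  %q", want, got)
	}
	log.Println("cp/ssh round trip OK")
}

The test repeats the same pattern for every source/destination node pair; the sketch shows one leg only.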

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-046403 node stop m03: (1.317626145s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-046403 status: exit status 7 (550.880907ms)

                                                
                                                
-- stdout --
	multinode-046403
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-046403-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-046403-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr: exit status 7 (536.010234ms)

                                                
                                                
-- stdout --
	multinode-046403
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-046403-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-046403-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:11:54.347976  126998 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:11:54.348082  126998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:54.348093  126998 out.go:374] Setting ErrFile to fd 2...
	I1229 07:11:54.348098  126998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:54.348440  126998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:11:54.348649  126998 out.go:368] Setting JSON to false
	I1229 07:11:54.348674  126998 mustload.go:66] Loading cluster: multinode-046403
	I1229 07:11:54.349602  126998 notify.go:221] Checking for updates...
	I1229 07:11:54.350103  126998 config.go:182] Loaded profile config "multinode-046403": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:11:54.350278  126998 status.go:174] checking status of multinode-046403 ...
	I1229 07:11:54.350871  126998 cli_runner.go:164] Run: docker container inspect multinode-046403 --format={{.State.Status}}
	I1229 07:11:54.369917  126998 status.go:371] multinode-046403 host status = "Running" (err=<nil>)
	I1229 07:11:54.369939  126998 host.go:66] Checking if "multinode-046403" exists ...
	I1229 07:11:54.370264  126998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-046403
	I1229 07:11:54.399270  126998 host.go:66] Checking if "multinode-046403" exists ...
	I1229 07:11:54.399589  126998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:11:54.399646  126998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-046403
	I1229 07:11:54.422055  126998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/multinode-046403/id_rsa Username:docker}
	I1229 07:11:54.526540  126998 ssh_runner.go:195] Run: systemctl --version
	I1229 07:11:54.532746  126998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:11:54.545554  126998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:11:54.600274  126998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:11:54.590878048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:11:54.600795  126998 kubeconfig.go:125] found "multinode-046403" server: "https://192.168.67.2:8443"
	I1229 07:11:54.600828  126998 api_server.go:166] Checking apiserver status ...
	I1229 07:11:54.600878  126998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:11:54.612736  126998 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1402/cgroup
	I1229 07:11:54.621133  126998 api_server.go:192] apiserver freezer: "13:freezer:/docker/0cc28a0ce2bb5b7be5dac1f5d876120049affe85911210a1f6f87bc575a4c1d9/kubepods/burstable/poda8ec638257d290c51e78fdcc7709a95b/4aaca66ab1090880b1d53ca66e58bd8e9ad3ad80e8ffb38fe8c1c3bf054ced05"
	I1229 07:11:54.621201  126998 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0cc28a0ce2bb5b7be5dac1f5d876120049affe85911210a1f6f87bc575a4c1d9/kubepods/burstable/poda8ec638257d290c51e78fdcc7709a95b/4aaca66ab1090880b1d53ca66e58bd8e9ad3ad80e8ffb38fe8c1c3bf054ced05/freezer.state
	I1229 07:11:54.628524  126998 api_server.go:214] freezer state: "THAWED"
	I1229 07:11:54.628554  126998 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1229 07:11:54.636826  126998 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1229 07:11:54.636854  126998 status.go:463] multinode-046403 apiserver status = Running (err=<nil>)
	I1229 07:11:54.636864  126998 status.go:176] multinode-046403 status: &{Name:multinode-046403 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:11:54.636884  126998 status.go:174] checking status of multinode-046403-m02 ...
	I1229 07:11:54.637507  126998 cli_runner.go:164] Run: docker container inspect multinode-046403-m02 --format={{.State.Status}}
	I1229 07:11:54.655275  126998 status.go:371] multinode-046403-m02 host status = "Running" (err=<nil>)
	I1229 07:11:54.655296  126998 host.go:66] Checking if "multinode-046403-m02" exists ...
	I1229 07:11:54.655597  126998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-046403-m02
	I1229 07:11:54.672579  126998 host.go:66] Checking if "multinode-046403-m02" exists ...
	I1229 07:11:54.672884  126998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:11:54.672921  126998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-046403-m02
	I1229 07:11:54.692798  126998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/multinode-046403-m02/id_rsa Username:docker}
	I1229 07:11:54.798151  126998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:11:54.811155  126998 status.go:176] multinode-046403-m02 status: &{Name:multinode-046403-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:11:54.811187  126998 status.go:174] checking status of multinode-046403-m03 ...
	I1229 07:11:54.811497  126998 cli_runner.go:164] Run: docker container inspect multinode-046403-m03 --format={{.State.Status}}
	I1229 07:11:54.829791  126998 status.go:371] multinode-046403-m03 host status = "Stopped" (err=<nil>)
	I1229 07:11:54.829814  126998 status.go:384] host is not running, skipping remaining checks
	I1229 07:11:54.829821  126998 status.go:176] multinode-046403-m03 status: &{Name:multinode-046403-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
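Note the pattern in the output above: once any node is stopped, minikube status exits with code 7 even though the remaining nodes are healthy, so callers have to treat exit 7 as "degraded, output still valid" rather than as a hard failure. A sketch of that handling, assuming a minikube binary on PATH and the profile name from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-046403", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes running:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit 7 means at least one node is stopped; stdout still carries the full status.
		fmt.Printf("cluster degraded (exit 7):\n%s", out)
	default:
		log.Fatalf("status failed: %v", err)
	}
}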

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-046403 node start m03 -v=5 --alsologtostderr: (7.308178066s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-046403
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-046403
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-046403: (25.162623126s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-046403 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-046403 --wait=true -v=5 --alsologtostderr: (55.422002426s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-046403
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-046403 node delete m03: (4.984306999s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)
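The readiness check at the end of DeleteNode is reusable on its own: the go-template prints one True/False per node Ready condition, so counting "True" lines gives the number of ready nodes. A sketch of the same check, assuming kubectl on PATH and the current context pointing at the cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses: emit each node's Ready condition status on its own line.
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl failed: %v", err)
	}
	ready := 0
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	fmt.Printf("%d node(s) Ready\n", ready)
}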

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 stop
E1229 07:13:30.655581    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-046403 stop: (23.937086036s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-046403 status: exit status 7 (99.021856ms)

                                                
                                                
-- stdout --
	multinode-046403
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-046403-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr: exit status 7 (97.854806ms)

                                                
                                                
-- stdout --
	multinode-046403
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-046403-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:13:53.413233  135830 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:13:53.413341  135830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:13:53.413350  135830 out.go:374] Setting ErrFile to fd 2...
	I1229 07:13:53.413355  135830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:13:53.413612  135830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:13:53.413794  135830 out.go:368] Setting JSON to false
	I1229 07:13:53.413826  135830 mustload.go:66] Loading cluster: multinode-046403
	I1229 07:13:53.414285  135830 config.go:182] Loaded profile config "multinode-046403": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:13:53.414304  135830 status.go:174] checking status of multinode-046403 ...
	I1229 07:13:53.414810  135830 cli_runner.go:164] Run: docker container inspect multinode-046403 --format={{.State.Status}}
	I1229 07:13:53.415059  135830 notify.go:221] Checking for updates...
	I1229 07:13:53.434607  135830 status.go:371] multinode-046403 host status = "Stopped" (err=<nil>)
	I1229 07:13:53.434630  135830 status.go:384] host is not running, skipping remaining checks
	I1229 07:13:53.434638  135830 status.go:176] multinode-046403 status: &{Name:multinode-046403 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:13:53.434667  135830 status.go:174] checking status of multinode-046403-m02 ...
	I1229 07:13:53.434970  135830 cli_runner.go:164] Run: docker container inspect multinode-046403-m02 --format={{.State.Status}}
	I1229 07:13:53.455064  135830 status.go:371] multinode-046403-m02 host status = "Stopped" (err=<nil>)
	I1229 07:13:53.455089  135830 status.go:384] host is not running, skipping remaining checks
	I1229 07:13:53.455096  135830 status.go:176] multinode-046403-m02 status: &{Name:multinode-046403-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-046403 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-046403 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.457120261s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-046403 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.21s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-046403
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-046403-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-046403-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.203623ms)

                                                
                                                
-- stdout --
	* [multinode-046403-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-046403-m02' is duplicated with machine name 'multinode-046403-m02' in profile 'multinode-046403'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-046403-m03 --driver=docker  --container-runtime=containerd
E1229 07:14:56.869190    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-046403-m03 --driver=docker  --container-runtime=containerd: (27.934654005s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-046403
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-046403: exit status 80 (349.80214ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-046403 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-046403-m03 already exists in multinode-046403-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-046403-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-046403-m03: (2.038429117s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.47s)
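The name-conflict behaviour exercised above is worth calling out: asking for a profile whose name matches an existing machine name in another profile fails fast with exit status 14 (MK_USAGE) before any container is created. A sketch that probes for exactly that, assuming minikube on PATH; the colliding name is the one from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// multinode-046403-m02 is already a machine inside profile multinode-046403,
	// so starting a profile with that name should be rejected before any work is done.
	cmd := exec.Command("minikube", "start", "-p", "multinode-046403-m02", "--driver=docker")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("duplicate profile name rejected as expected:\n%s", out)
		return
	}
	log.Fatalf("expected exit status 14, got err=%v output=%s", err, out)
}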

                                                
                                    
TestScheduledStopUnix (103.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-052384 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-052384 --memory=3072 --driver=docker  --container-runtime=containerd: (26.872961453s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-052384 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:15:50.265244  145470 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:15:50.265494  145470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:50.265522  145470 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:50.265541  145470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:50.265819  145470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:15:50.266164  145470 out.go:368] Setting JSON to false
	I1229 07:15:50.266335  145470 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:15:50.266729  145470 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:15:50.266848  145470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/scheduled-stop-052384/config.json ...
	I1229 07:15:50.267071  145470 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:15:50.267232  145470 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-052384 -n scheduled-stop-052384
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-052384 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:15:50.714036  145560 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:15:50.714198  145560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:50.714223  145560 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:50.714241  145560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:50.714551  145560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:15:50.714886  145560 out.go:368] Setting JSON to false
	I1229 07:15:50.715133  145560 daemonize_unix.go:73] killing process 145486 as it is an old scheduled stop
	I1229 07:15:50.718507  145560 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:15:50.718989  145560 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:15:50.719092  145560 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/scheduled-stop-052384/config.json ...
	I1229 07:15:50.719324  145560 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:15:50.719495  145560 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1229 07:15:50.725555    4352 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/scheduled-stop-052384/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-052384 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-052384 -n scheduled-stop-052384
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-052384
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-052384 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:16:16.684647  146252 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:16.684831  146252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:16.684843  146252 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:16.684850  146252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:16.685279  146252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:16:16.685642  146252 out.go:368] Setting JSON to false
	I1229 07:16:16.685785  146252 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:16:16.686499  146252 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:16:16.686651  146252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/scheduled-stop-052384/config.json ...
	I1229 07:16:16.686890  146252 mustload.go:66] Loading cluster: scheduled-stop-052384
	I1229 07:16:16.687075  146252 config.go:182] Loaded profile config "scheduled-stop-052384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
E1229 07:16:19.924934    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-052384
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-052384: exit status 7 (78.793778ms)

                                                
                                                
-- stdout --
	scheduled-stop-052384
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-052384 -n scheduled-stop-052384
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-052384 -n scheduled-stop-052384: exit status 7 (74.453196ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-052384" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-052384
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-052384: (4.71467082s)
--- PASS: TestScheduledStopUnix (103.22s)
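For reference, the scheduled-stop flow above boils down to three commands: stop --schedule to arm a delayed stop, stop --cancel-scheduled to disarm it, and status to observe the result once the timer fires. A compact sketch of the arm-and-wait path, assuming minikube on PATH; the profile name is just the one this run generated:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func minikube(args ...string) ([]byte, error) {
	return exec.Command("minikube", args...).CombinedOutput()
}

func main() {
	const profile = "scheduled-stop-052384"
	// Arm a stop 15 seconds from now; the command itself returns immediately.
	if out, err := minikube("stop", "-p", profile, "--schedule", "15s"); err != nil {
		log.Fatalf("scheduling stop failed: %v\n%s", err, out)
	}
	// Give the scheduled stop time to fire, then check the host state
	// (status exits non-zero once the host is stopped, so the error is ignored here).
	time.Sleep(30 * time.Second)
	out, _ := minikube("status", "-p", profile, "--format", "{{.Host}}")
	fmt.Printf("host state after scheduled stop: %s\n", out)
}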

                                                
                                    
TestInsufficientStorage (12.75s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-623886 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-623886 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.187208234s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5b90eb2c-b3af-419d-89ca-9b4aee6f9f32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-623886] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cba4393-b472-4943-a478-0be584950749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"90200632-91e6-4d07-9548-85057b32ab90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b291754a-0f83-4faa-90f2-b060e6d8d8c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig"}}
	{"specversion":"1.0","id":"37d588b4-f11a-48a7-b049-e36fdf8f9899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube"}}
	{"specversion":"1.0","id":"1eed10cf-42d3-4481-9c00-298cde5106c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f77bc206-a840-4bcf-a483-1ca10533183a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"745cff13-e54f-4fcf-aa2b-8e667e7a4b21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"13084489-22b5-462c-95a5-f3313a919602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"399f1b54-4aeb-41d1-86db-959e8838d7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"88137f29-416f-48e1-b7b3-514b11b91e43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7b5439c4-f882-4b43-b368-b197b1877081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-623886\" primary control-plane node in \"insufficient-storage-623886\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"df6bede1-ec77-42bd-87b1-9779cf901c72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766979815-22353 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9b99abc-e536-4f06-842b-dfc95828cf96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"260c3c82-3f6d-4f8b-953d-eed5c7bf2168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-623886 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-623886 --output=json --layout=cluster: exit status 7 (312.885895ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-623886","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-623886","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:17:17.046157  148130 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-623886" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-623886 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-623886 --output=json --layout=cluster: exit status 7 (312.971262ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-623886","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-623886","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:17:17.360818  148196 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-623886" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig
	E1229 07:17:17.370987  148196 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/insufficient-storage-623886/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-623886" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-623886
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-623886: (1.934891406s)
--- PASS: TestInsufficientStorage (12.75s)
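With --output=json, each progress line above is a CloudEvents-style JSON object, which makes the run scriptable. Below is a minimal decoder for the fields visible in this log (specversion, id, source, type, datacontenttype, data); it reads events from stdin, so it can be fed with a pipe such as minikube start ... --output=json, and it makes no claims about fields not shown here.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the JSON lines above; anything else is ignored.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events carrying advice can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise interleaved with the events
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}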

                                                
                                    
TestRunningBinaryUpgrade (73.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2603008374 start -p running-upgrade-999810 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2603008374 start -p running-upgrade-999810 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (37.314770706s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-999810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1229 07:24:56.867741    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-999810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.282892601s)
helpers_test.go:176: Cleaning up "running-upgrade-999810" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-999810
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-999810: (2.199842467s)
--- PASS: TestRunningBinaryUpgrade (73.73s)

                                                
                                    
TestKubernetesUpgrade (334.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1229 07:18:30.654892    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.117792112s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-542392 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-542392 --alsologtostderr: (1.40776667s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-542392 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-542392 status --format={{.Host}}: exit status 7 (89.022661ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.298762857s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-542392 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (91.568107ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-542392] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-542392
	    minikube start -p kubernetes-upgrade-542392 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5423922 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-542392 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-542392 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.226221527s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-542392" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-542392
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-542392: (2.089805131s)
--- PASS: TestKubernetesUpgrade (334.41s)
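The downgrade rejection above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) is at heart a version comparison: the requested v1.28.0 is older than the cluster's existing v1.35.0, so minikube refuses and suggests recreating the cluster instead. A toy illustration of that comparison, not minikube's actual implementation, using the two versions from this run:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// cmpVersion compares two "vMAJOR.MINOR.PATCH" strings: -1 if a<b, 0 if equal, 1 if a>b.
func cmpVersion(a, b string) int {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < 3; i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			if na < nb {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	existing, requested := "v1.35.0", "v1.28.0"
	if cmpVersion(requested, existing) < 0 {
		fmt.Printf("refusing to downgrade %s cluster to %s (recreate the cluster instead)\n",
			existing, requested)
		return
	}
	fmt.Printf("upgrade/restart from %s to %s is allowed\n", existing, requested)
}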

                                                
                                    
TestMissingContainerUpgrade (156.58s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2601320556 start -p missing-upgrade-557536 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2601320556 start -p missing-upgrade-557536 --memory=3072 --driver=docker  --container-runtime=containerd: (1m6.273091249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-557536
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-557536: (1.002923036s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-557536
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-557536 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-557536 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m25.477324466s)
helpers_test.go:176: Cleaning up "missing-upgrade-557536" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-557536
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-557536: (2.106705155s)
--- PASS: TestMissingContainerUpgrade (156.58s)

                                                
                                    
TestPause/serial/Start (54.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017011 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-017011 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (54.359657917s)
--- PASS: TestPause/serial/Start (54.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017011 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-017011 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.202324067s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.22s)

                                                
                                    
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-017011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-017011 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-017011 --output=json --layout=cluster: exit status 2 (412.100523ms)

                                                
                                                
-- stdout --
	{"Name":"pause-017011","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-017011","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
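The --layout=cluster status printed above is itself machine-readable: 418 marks a paused component, 405 a stopped one, 200 OK, and 500/507 error states. Below is a small decoder for exactly the fields shown in this log, assuming minikube on PATH and the profile name from this run; a paused cluster makes the command exit non-zero (exit 2 above), so that case is tolerated.

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// clusterStatus mirrors the fields visible in the --layout=cluster output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	cmd := exec.Command("minikube", "status", "-p", "pause-017011",
		"--output=json", "--layout=cluster")
	out, err := cmd.Output()
	// Non-zero exit still leaves complete JSON on stdout, so only bail on other errors.
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		log.Fatalf("could not run minikube status: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("unexpected status output: %v\n%s", err, out)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %s (%d)\n", n.Name, n.StatusName, n.StatusCode)
	}
}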

                                                
                                    
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-017011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestPause/serial/PauseAgain (0.99s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-017011 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

                                                
                                    
TestPause/serial/DeletePaused (3.47s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-017011 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-017011 --alsologtostderr -v=5: (3.465496726s)
--- PASS: TestPause/serial/DeletePaused (3.47s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-017011
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-017011: exit status 1 (29.778962ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-017011: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1229 07:19:56.866923    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (1.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (305.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3216768165 start -p stopped-upgrade-801497 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3216768165 start -p stopped-upgrade-801497 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.933526142s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3216768165 -p stopped-upgrade-801497 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3216768165 -p stopped-upgrade-801497 stop: (1.258508599s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-801497 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1229 07:23:30.654452    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-801497 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.245340298s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.44s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-801497
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-801497: (3.076855534s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (66.24s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-458991 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-458991 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (59.39006407s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-458991 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-458991
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-458991: (5.925006817s)
--- PASS: TestPreload/Start-NoPreload-PullImage (66.24s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (47.74s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-458991 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1229 07:26:33.700286    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-458991 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (47.502885678s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-458991 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (47.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (89.322361ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-536464] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (27.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536464 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536464 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.028287274s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-536464 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.39s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (13.997335251s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-536464 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-536464 status -o json: exit status 2 (318.264001ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-536464","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-536464
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-536464: (2.017446619s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.33s)

                                                
                                    
TestNoKubernetes/serial/Start (7.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536464 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.802559813s)
--- PASS: TestNoKubernetes/serial/Start (7.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-536464 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-536464 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.638524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
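
As an aside, a minimal local sketch (assuming a Linux host with systemd; not the test's own helper code) of the check this test runs over SSH: `systemctl is-active --quiet` exits non-zero when the unit is not active, which is what "Kubernetes not running" means here. The SSH session above reported exit status 3, systemd's code for an inactive unit.

// kubelet_check.go - hypothetical local equivalent of the is-active check above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func kubeletActive() (bool, error) {
	// Mirrors the command the test runs inside the node over SSH.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit (status 3 in the run above) means the unit is not active.
		return false, nil
	}
	return false, err // systemctl itself could not be run
}

func main() {
	active, err := kubeletActive()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet active:", active)
}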

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-536464
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-536464: (1.335212435s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536464 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536464 --driver=docker  --container-runtime=containerd: (6.96496752s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-536464 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-536464 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.208377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-343069 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-343069 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (206.218943ms)

                                                
                                                
-- stdout --
	* [false-343069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:28:11.815799  203944 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:28:11.815984  203944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:28:11.815998  203944 out.go:374] Setting ErrFile to fd 2...
	I1229 07:28:11.816015  203944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:28:11.816415  203944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
	I1229 07:28:11.816966  203944 out.go:368] Setting JSON to false
	I1229 07:28:11.818342  203944 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4243,"bootTime":1766989049,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1229 07:28:11.818482  203944 start.go:143] virtualization:  
	I1229 07:28:11.821981  203944 out.go:179] * [false-343069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:28:11.825795  203944 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:28:11.825936  203944 notify.go:221] Checking for updates...
	I1229 07:28:11.831822  203944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:28:11.834859  203944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
	I1229 07:28:11.837938  203944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
	I1229 07:28:11.841002  203944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:28:11.843912  203944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:28:11.847411  203944 config.go:182] Loaded profile config "force-systemd-env-765623": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1229 07:28:11.847520  203944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:28:11.872193  203944 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:28:11.872327  203944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:28:11.951235  203944 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:28:11.93332882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:28:11.951344  203944 docker.go:319] overlay module found
	I1229 07:28:11.954561  203944 out.go:179] * Using the docker driver based on user configuration
	I1229 07:28:11.957503  203944 start.go:309] selected driver: docker
	I1229 07:28:11.957528  203944 start.go:928] validating driver "docker" against <nil>
	I1229 07:28:11.957542  203944 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:28:11.961239  203944 out.go:203] 
	W1229 07:28:11.964143  203944 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1229 07:28:11.967245  203944 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-343069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-343069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-343069"

                                                
                                                
----------------------- debugLogs end: false-343069 [took: 3.245966892s] --------------------------------
helpers_test.go:176: Cleaning up "false-343069" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-343069
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1229 07:34:56.867044    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.863969975s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-599664 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [85b4b386-58b5-437f-83f9-d476115410ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [85b4b386-58b5-437f-83f9-d476115410ff] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003536311s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-599664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-599664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-599664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050402182s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-599664 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-599664 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-599664 --alsologtostderr -v=3: (12.134868033s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-599664 -n old-k8s-version-599664
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-599664 -n old-k8s-version-599664: exit status 7 (82.857588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-599664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.694987131s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-599664 -n old-k8s-version-599664
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wjpbn" [0eae367f-648f-46bf-b425-1400bc49149a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003327292s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wjpbn" [0eae367f-648f-46bf-b425-1400bc49149a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003188443s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-599664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-599664 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-599664 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-599664 -n old-k8s-version-599664
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-599664 -n old-k8s-version-599664: exit status 2 (327.677005ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-599664 -n old-k8s-version-599664
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-599664 -n old-k8s-version-599664: exit status 2 (345.033088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-599664 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-599664 -n old-k8s-version-599664
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-599664 -n old-k8s-version-599664
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (48.498750041s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-294279 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bdb11985-4b70-49d9-b33b-345a8c6968a1] Pending
helpers_test.go:353: "busybox" [bdb11985-4b70-49d9-b33b-345a8c6968a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bdb11985-4b70-49d9-b33b-345a8c6968a1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004404338s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-294279 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-294279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-294279 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-294279 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-294279 --alsologtostderr -v=3: (12.138317228s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-294279 -n embed-certs-294279
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-294279 -n embed-certs-294279: exit status 7 (70.778309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-294279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1229 07:38:30.654299    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.048682792s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-294279 -n embed-certs-294279
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7f6nm" [8b496e92-536c-4e62-b55d-b1edd21b135b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003582872s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-7f6nm" [8b496e92-536c-4e62-b55d-b1edd21b135b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003283645s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-294279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-294279 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
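VerifyKubernetesImages lists the images present on the node and reports anything outside the expected Kubernetes image set; the kindest/kindnetd and busybox entries above are flagged but allowed. The same listing by hand:

  out/minikube-linux-arm64 -p embed-certs-294279 image list --format=json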

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-294279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-294279 -n embed-certs-294279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-294279 -n embed-certs-294279: exit status 2 (347.701695ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-294279 -n embed-certs-294279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-294279 -n embed-certs-294279: exit status 2 (337.804119ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-294279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-294279 -n embed-certs-294279
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-294279 -n embed-certs-294279
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.15s)
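The Pause step cycles the cluster through pause and unpause and checks individual component fields of `minikube status` at each stage; while paused, the APIServer field reports "Paused" and the Kubelet field "Stopped", both with exit status 2, which the test treats as expected. The same cycle by hand (`|| true` only so a script survives the expected non-zero status calls):

  out/minikube-linux-arm64 pause -p embed-certs-294279 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-294279 -n embed-certs-294279 || true
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-294279 -n embed-certs-294279 || true
  out/minikube-linux-arm64 unpause -p embed-certs-294279 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-294279 -n embed-certs-294279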

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1229 07:39:56.867331    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (52.426251179s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.43s)
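The no-preload group differs from the others only in passing --preload=false, which (as I read it) skips the preloaded images tarball so the runtime pulls images individually during startup. The start invocation, reformatted from the log:

  out/minikube-linux-arm64 start -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true \
    --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0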

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-918033 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d5de910d-6d3a-471d-9b19-89574541b5a4] Pending
helpers_test.go:353: "busybox" [d5de910d-6d3a-471d-9b19-89574541b5a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d5de910d-6d3a-471d-9b19-89574541b5a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004246684s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-918033 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-918033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-918033 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.62s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-918033 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-918033 --alsologtostderr -v=3: (12.62462164s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-480455 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1229 07:40:20.058505    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:22.619612    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-480455 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (53.051531076s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.05s)
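default-k8s-diff-port differs from the default group only in serving the API on port 8444 instead of the usual 8443, via --apiserver-port. The start command from the log, plus an assumed follow-up check (not part of the test) that the kubeconfig entry points at the non-default port:

  out/minikube-linux-arm64 start -p default-k8s-diff-port-480455 --memory=3072 --alsologtostderr --wait=true \
    --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0
  # assumed verification: the server URL for this cluster should end in :8444
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-480455")].cluster.server}'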

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-918033 -n no-preload-918033
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-918033 -n no-preload-918033: exit status 7 (124.930674ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-918033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1229 07:40:27.740320    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:37.980545    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:58.460966    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (53.791962924s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-918033 -n no-preload-918033
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-480455 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6ebac2dc-9d58-4085-9f40-e96b995be574] Pending
helpers_test.go:353: "busybox" [6ebac2dc-9d58-4085-9f40-e96b995be574] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6ebac2dc-9d58-4085-9f40-e96b995be574] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004107471s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-480455 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cqkvt" [d2392633-4334-4166-b6cb-567413cf4fee] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003853905s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-480455 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-480455 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-480455 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-480455 --alsologtostderr -v=3: (12.25233068s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cqkvt" [d2392633-4334-4166-b6cb-567413cf4fee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003957563s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-918033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-918033 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.48s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-918033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-918033 -n no-preload-918033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-918033 -n no-preload-918033: exit status 2 (323.097429ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-918033 -n no-preload-918033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-918033 -n no-preload-918033: exit status 2 (319.167739ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-918033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-918033 -n no-preload-918033
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-918033 -n no-preload-918033
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455: exit status 7 (118.666187ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-480455 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-480455 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-480455 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (58.35703471s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-700464 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1229 07:41:39.421485    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-700464 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (37.803563884s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.80s)
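The newest-cni group starts with an explicit CNI network plugin and a custom pod CIDR pushed through kubeadm's extra config, and only waits for the apiserver, system pods and default service account, since workloads cannot schedule until a CNI is actually installed (hence the WARNING lines in the later steps). The invocation, reformatted from the log:

  out/minikube-linux-arm64 start -p newest-cni-700464 --memory=3072 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0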

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-700464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-700464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.224291846s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-700464 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-700464 --alsologtostderr -v=3: (1.57756152s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-700464 -n newest-cni-700464
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-700464 -n newest-cni-700464: exit status 7 (70.072248ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-700464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-700464 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-700464 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (14.635950035s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-700464 -n newest-cni-700464
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-57jl7" [425feaab-b072-455b-9949-8c6a3deb863b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002817146s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-700464 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-700464 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-700464 -n newest-cni-700464
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-700464 -n newest-cni-700464: exit status 2 (332.682548ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-700464 -n newest-cni-700464
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-700464 -n newest-cni-700464: exit status 2 (337.508551ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-700464 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-700464 -n newest-cni-700464
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-700464 -n newest-cni-700464
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-57jl7" [425feaab-b072-455b-9949-8c6a3deb863b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004515792s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-480455 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.46s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-742555 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-742555 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.275741473s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-742555" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-742555
--- PASS: TestPreload/PreloadSrc/gcs (4.46s)

                                                
                                    
TestPreload/PreloadSrc/github (6.19s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-772184 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-772184 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (5.800710791s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-772184" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-772184
--- PASS: TestPreload/PreloadSrc/github (6.19s)
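The TestPreload/PreloadSrc subtests only exercise the download path: each runs a --download-only start with a different --preload-source and then deletes the throwaway profile, so nothing is ever booted. A condensed sketch of the gcs and github variants, using the profile names generated in this run:

  out/minikube-linux-arm64 start -p test-preload-dl-gcs-742555 --download-only --kubernetes-version v1.34.0-rc.1 \
    --preload-source=gcs --alsologtostderr --v=1 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-742555
  out/minikube-linux-arm64 start -p test-preload-dl-github-772184 --download-only --kubernetes-version v1.34.0-rc.2 \
    --preload-source=github --alsologtostderr --v=1 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 delete -p test-preload-dl-github-772184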

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-480455 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-480455 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455: exit status 2 (407.680211ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455: exit status 2 (425.16135ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-480455 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480455 -n default-k8s-diff-port-480455
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.23s)
E1229 07:48:30.654451    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.119963    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.125305    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.135698    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.156095    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.196505    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.276923    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.437412    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:41.758231    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:42.398679    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:43.679379    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.576463    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.581780    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.592086    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.612401    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.652699    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.732945    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:44.893693    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:45.214307    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:45.855458    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:46.239650    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:47.136227    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:49.696475    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:48:51.360499    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/kindnet-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (1.04s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-175977 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-175977" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-175977
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.04s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.354802361s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (47.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1229 07:43:01.341870    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:43:13.701168    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:43:30.654534    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (47.790576667s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-rh8w8" [5c139e68-f80f-4ab3-b3ff-d959bdf1fb01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006351849s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
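ControllerPod verifies that the kindnet CNI daemon actually runs once the cluster has been started with --cni=kindnet. Checking the same thing by hand; the kubectl get is an assumed stand-in for the test's label-based wait loop:

  out/minikube-linux-arm64 start -p kindnet-343069 --memory=3072 --alsologtostderr --wait=true \
    --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=containerd
  # the kindnet pod in kube-system should be Running
  kubectl --context kindnet-343069 get pods -n kube-system -l app=kindnet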

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-343069 "pgrep -a kubelet"
I1229 07:43:44.316359    4352 config.go:182] Loaded profile config "auto-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-x9cfj" [6e5ac7ac-286c-4fc6-a33a-7ddb3253390f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-x9cfj" [6e5ac7ac-286c-4fc6-a33a-7ddb3253390f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004117707s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-343069 "pgrep -a kubelet"
I1229 07:43:47.517316    4352 config.go:182] Loaded profile config "kindnet-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zch9p" [88b4f4af-a1b4-490b-91cd-f232fd1229fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zch9p" [88b4f4af-a1b4-490b-91cd-f232fd1229fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003891071s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
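The DNS, Localhost and HairPin checks for each network plugin all run inside the netcat deployment created earlier: a cluster DNS lookup, a TCP connect to the pod's own localhost port, and a connect back to the pod through its own Service name (the hairpin case). The three probes, exactly as issued above for the kindnet cluster:

  kubectl --context kindnet-343069 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context kindnet-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context kindnet-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"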

                                                
                                    
TestNetworkPlugins/group/calico/Start (63.57s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.570605218s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.57s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1229 07:44:56.867668    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:00.836563    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:00.847894    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:00.860578    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:00.884063    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:00.924461    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:01.004795    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:01.165274    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:01.485466    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:02.126257    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:03.406522    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:05.966890    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:11.087492    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:45:17.495543    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.463151835s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-343069 "pgrep -a kubelet"
I1229 07:45:21.234724    4352 config.go:182] Loaded profile config "custom-flannel-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-343069 replace --force -f testdata/netcat-deployment.yaml
E1229 07:45:21.328326    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bqk82" [f5da3bdd-7a6d-4a52-b2bf-5e93eb92f1ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bqk82" [f5da3bdd-7a6d-4a52-b2bf-5e93eb92f1ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003685322s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-s8vbw" [e12e3d0d-8703-4648-b078-d742de28a2d3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-s8vbw" [e12e3d0d-8703-4648-b078-d742de28a2d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004082077s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-343069 "pgrep -a kubelet"
I1229 07:45:30.211580    4352 config.go:182] Loaded profile config "calico-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-n52kz" [3421f07e-b037-45e9-a7c9-d1e6c4f5347a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-n52kz" [3421f07e-b037-45e9-a7c9-d1e6c4f5347a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004674836s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m14.077099717s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1229 07:46:12.768615    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:12.773853    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:12.784584    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:12.804815    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:12.845448    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:12.925592    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:13.085949    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:13.407497    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:14.048684    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:15.328858    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:17.889763    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:22.769708    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:23.010758    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:33.251939    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:46:53.732687    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.501851498s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-9mhcg" [ae0dfd7a-fa25-403c-ba73-23dfcd54875c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004594665s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-343069 "pgrep -a kubelet"
I1229 07:47:09.303091    4352 config.go:182] Loaded profile config "flannel-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-grqvp" [cbecd5e0-05ea-4435-93c7-806bab6589ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-grqvp" [cbecd5e0-05ea-4435-93c7-806bab6589ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003389368s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-343069 "pgrep -a kubelet"
I1229 07:47:11.216204    4352 config.go:182] Loaded profile config "enable-default-cni-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pg5d9" [4604812e-32c8-4a6a-93be-6115059a90ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pg5d9" [4604812e-32c8-4a6a-93be-6115059a90ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004048038s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-343069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m5.327065671s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-343069 "pgrep -a kubelet"
I1229 07:48:52.366911    4352 config.go:182] Loaded profile config "bridge-343069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-343069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mj277" [047bf3fc-a0db-4e27-b0ca-1fa86f686dbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 07:48:54.817494    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/auto-343069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-mj277" [047bf3fc-a0db-4e27-b0ca-1fa86f686dbb] Running
E1229 07:48:56.614012    4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/default-k8s-diff-port-480455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003980255s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-343069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-343069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-696983 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-696983" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-696983
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-948437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-948437
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-343069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-343069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-343069"

                                                
                                                
----------------------- debugLogs end: kubenet-343069 [took: 3.310879538s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-343069" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-343069
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-343069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-343069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-343069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-343069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-343069"

                                                
                                                
----------------------- debugLogs end: cilium-343069 [took: 3.740410906s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-343069" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-343069
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    