Test Report: Docker_Linux_docker_arm64 22427

f815509b9ccb41a33be05aa7241c338e7909bf25:2026-01-10:43184

Test failures (2/352)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  507.82
53     TestForceSystemdEnv   507.61
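
Both failures are in the force-systemd integration tests. To iterate on one of them outside CI, minikube's integration suite can be invoked directly with go test; the sketch below is a minimal, unverified example that assumes the standard test/integration layout, a prebuilt out/minikube-linux-arm64 binary, and the harness flags defined in test/integration/main_test.go (-binary and -minikube-start-args) — confirm the flag names there before relying on them.

	# Hypothetical local re-run of one failed test, matching this job's driver/runtime.
	go test -v -timeout 60m ./test/integration -run TestForceSystemdFlag \
	  -args -binary=../../out/minikube-linux-arm64 \
	  -minikube-start-args="--driver=docker --container-runtime=docker"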
TestForceSystemdFlag (507.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0110 08:59:28.841245    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:00:54.652635    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.535383    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.540737    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.551097    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.571502    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.611863    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.692174    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.852633    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:21.173259    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:21.814390    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:23.094957    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:25.655200    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:30.775474    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:41.015788    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:51.602444    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:03:01.496125    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:03:42.457333    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:04:28.846643    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:05:04.379446    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:07:20.535444    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m23.478153773s)
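
The failing start can also be replayed by hand to capture diagnostics before the profile is torn down. A minimal sketch reusing the exact arguments from docker_test.go:91 above (the profile name is the one generated by this run; any fresh name works, and logs/delete are standard minikube subcommands):

	# Replay the failing start, record the exit status, then collect logs and clean up.
	out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 \
	  --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
	echo "start exit status: $?"
	out/minikube-linux-arm64 logs -p force-systemd-flag-573381 --file=force-systemd-flag.log
	out/minikube-linux-arm64 delete -p force-systemd-flag-573381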

-- stdout --
	* [force-systemd-flag-573381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-573381" primary control-plane node in "force-systemd-flag-573381" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	
	

-- /stdout --
** stderr ** 
	I0110 08:59:08.534283  226492 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:59:08.534423  226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:08.534431  226492 out.go:374] Setting ErrFile to fd 2...
	I0110 08:59:08.534436  226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:08.534716  226492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:59:08.535124  226492 out.go:368] Setting JSON to false
	I0110 08:59:08.535945  226492 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2501,"bootTime":1768033048,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:59:08.536012  226492 start.go:143] virtualization:  
	I0110 08:59:08.540087  226492 out.go:179] * [force-systemd-flag-573381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:59:08.544100  226492 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:59:08.544407  226492 notify.go:221] Checking for updates...
	I0110 08:59:08.550278  226492 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:59:08.553314  226492 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:59:08.556418  226492 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:59:08.559460  226492 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:59:08.562977  226492 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:59:08.566347  226492 config.go:182] Loaded profile config "force-systemd-env-861581": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:08.566466  226492 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:59:08.606961  226492 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:59:08.607065  226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:08.717444  226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.708565746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:08.717541  226492 docker.go:319] overlay module found
	I0110 08:59:08.721224  226492 out.go:179] * Using the docker driver based on user configuration
	I0110 08:59:08.724306  226492 start.go:309] selected driver: docker
	I0110 08:59:08.724327  226492 start.go:928] validating driver "docker" against <nil>
	I0110 08:59:08.724341  226492 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:59:08.724965  226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:08.818940  226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.808836061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:08.819091  226492 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:59:08.819299  226492 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:59:08.821663  226492 out.go:179] * Using Docker driver with root privileges
	I0110 08:59:08.824752  226492 cni.go:84] Creating CNI manager for ""
	I0110 08:59:08.824824  226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:08.824834  226492 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 08:59:08.824913  226492 start.go:353] cluster config:
	{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:08.829730  226492 out.go:179] * Starting "force-systemd-flag-573381" primary control-plane node in "force-systemd-flag-573381" cluster
	I0110 08:59:08.832988  226492 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 08:59:08.835974  226492 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:59:08.838696  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:08.838738  226492 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 08:59:08.838749  226492 cache.go:65] Caching tarball of preloaded images
	I0110 08:59:08.838829  226492 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 08:59:08.838837  226492 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 08:59:08.838952  226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
	I0110 08:59:08.838969  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json: {Name:mk792ad7b15ee4a35e6dcc78722d34e91cdf2a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:08.839095  226492 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:59:08.864802  226492 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:59:08.864821  226492 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:59:08.864835  226492 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:59:08.864865  226492 start.go:360] acquireMachinesLock for force-systemd-flag-573381: {Name:mk03eb5fbb2bba12d438b336944081d9ef274656 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:59:08.864956  226492 start.go:364] duration metric: took 76.341µs to acquireMachinesLock for "force-systemd-flag-573381"
	I0110 08:59:08.864979  226492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 08:59:08.865046  226492 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:59:08.868543  226492 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:59:08.868782  226492 start.go:159] libmachine.API.Create for "force-systemd-flag-573381" (driver="docker")
	I0110 08:59:08.868812  226492 client.go:173] LocalClient.Create starting
	I0110 08:59:08.868883  226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem
	I0110 08:59:08.868918  226492 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:08.868933  226492 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:08.868978  226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem
	I0110 08:59:08.869002  226492 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:08.869013  226492 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:08.869403  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:59:08.885872  226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:59:08.885961  226492 network_create.go:284] running [docker network inspect force-systemd-flag-573381] to gather additional debugging logs...
	I0110 08:59:08.885976  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381
	W0110 08:59:08.905316  226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 returned with exit code 1
	I0110 08:59:08.905422  226492 network_create.go:287] error running [docker network inspect force-systemd-flag-573381]: docker network inspect force-systemd-flag-573381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-573381 not found
	I0110 08:59:08.905445  226492 network_create.go:289] output of [docker network inspect force-systemd-flag-573381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-573381 not found
	
	** /stderr **
	I0110 08:59:08.905535  226492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:08.924865  226492 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1cad6f167682 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:2e:00:65:f8:e1} reservation:<nil>}
	I0110 08:59:08.925148  226492 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-470266542ec0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:41:d2:db:7c:3c} reservation:<nil>}
	I0110 08:59:08.925444  226492 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed6e044af825 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:1d:61:47:90:b1} reservation:<nil>}
	I0110 08:59:08.925750  226492 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-322c731839f0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:f9:1c:29:7d:48} reservation:<nil>}
	I0110 08:59:08.926117  226492 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a88410}
	I0110 08:59:08.926138  226492 network_create.go:124] attempt to create docker network force-systemd-flag-573381 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 08:59:08.926194  226492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-573381 force-systemd-flag-573381
	I0110 08:59:09.004073  226492 network_create.go:108] docker network force-systemd-flag-573381 192.168.85.0/24 created
	I0110 08:59:09.004107  226492 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-573381" container
	I0110 08:59:09.004205  226492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:59:09.022515  226492 cli_runner.go:164] Run: docker volume create force-systemd-flag-573381 --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:59:09.042894  226492 oci.go:103] Successfully created a docker volume force-systemd-flag-573381
	I0110 08:59:09.042990  226492 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-573381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --entrypoint /usr/bin/test -v force-systemd-flag-573381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:59:09.628587  226492 oci.go:107] Successfully prepared a docker volume force-systemd-flag-573381
	I0110 08:59:09.628655  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:09.628667  226492 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:59:09.628730  226492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:59:12.873367  226492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.244512326s)
	I0110 08:59:12.873399  226492 kic.go:203] duration metric: took 3.244728311s to extract preloaded images to volume ...
	W0110 08:59:12.873534  226492 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 08:59:12.873643  226492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:59:12.964719  226492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-573381 --name force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-573381 --network force-systemd-flag-573381 --ip 192.168.85.2 --volume force-systemd-flag-573381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 08:59:13.335555  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Running}}
	I0110 08:59:13.363137  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:13.385096  226492 cli_runner.go:164] Run: docker exec force-systemd-flag-573381 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:59:13.441925  226492 oci.go:144] the created container "force-systemd-flag-573381" has a running status.
	I0110 08:59:13.441953  226492 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa...
	I0110 08:59:13.817711  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 08:59:13.817809  226492 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:59:13.849514  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:13.876467  226492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:59:13.876490  226492 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-573381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:59:13.967478  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:14.002485  226492 machine.go:94] provisionDockerMachine start ...
	I0110 08:59:14.002580  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:14.031458  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:14.031817  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:14.031827  226492 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:59:14.032463  226492 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55256->127.0.0.1:33002: read: connection reset by peer
	I0110 08:59:17.189005  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
	
	I0110 08:59:17.189034  226492 ubuntu.go:182] provisioning hostname "force-systemd-flag-573381"
	I0110 08:59:17.189096  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:17.213646  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:17.213955  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:17.213988  226492 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-573381 && echo "force-systemd-flag-573381" | sudo tee /etc/hostname
	I0110 08:59:17.393000  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
	
	I0110 08:59:17.393073  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:17.417619  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:17.417930  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:17.417946  226492 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-573381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-573381/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-573381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:59:17.577322  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:59:17.577379  226492 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2299/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2299/.minikube}
	I0110 08:59:17.577405  226492 ubuntu.go:190] setting up certificates
	I0110 08:59:17.577415  226492 provision.go:84] configureAuth start
	I0110 08:59:17.577472  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:17.603411  226492 provision.go:143] copyHostCerts
	I0110 08:59:17.603458  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:17.603498  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem, removing ...
	I0110 08:59:17.603505  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:17.603594  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem (1082 bytes)
	I0110 08:59:17.603679  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:17.603697  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem, removing ...
	I0110 08:59:17.603701  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:17.603727  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem (1123 bytes)
	I0110 08:59:17.603777  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:17.603792  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem, removing ...
	I0110 08:59:17.603796  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:17.603818  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem (1679 bytes)
	I0110 08:59:17.603870  226492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-573381 san=[127.0.0.1 192.168.85.2 force-systemd-flag-573381 localhost minikube]
	I0110 08:59:18.101227  226492 provision.go:177] copyRemoteCerts
	I0110 08:59:18.101309  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:59:18.101374  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.120236  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:18.228191  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 08:59:18.228270  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 08:59:18.252222  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 08:59:18.252289  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 08:59:18.276205  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 08:59:18.276272  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:59:18.301241  226492 provision.go:87] duration metric: took 723.793723ms to configureAuth
	I0110 08:59:18.301273  226492 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:59:18.301552  226492 config.go:182] Loaded profile config "force-systemd-flag-573381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:18.301635  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.333060  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.333475  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.333499  226492 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 08:59:18.486799  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 08:59:18.486870  226492 ubuntu.go:71] root file system type: overlay
	I0110 08:59:18.487027  226492 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 08:59:18.487127  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.522677  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.522986  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.523069  226492 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 08:59:18.721846  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 08:59:18.721925  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.755481  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.755783  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.755815  226492 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 08:59:19.935990  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 08:59:18.711684691 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0110 08:59:19.936016  226492 machine.go:97] duration metric: took 5.933508422s to provisionDockerMachine
	I0110 08:59:19.936028  226492 client.go:176] duration metric: took 11.067209235s to LocalClient.Create
	I0110 08:59:19.936041  226492 start.go:167] duration metric: took 11.067259614s to libmachine.API.Create "force-systemd-flag-573381"
	I0110 08:59:19.936049  226492 start.go:293] postStartSetup for "force-systemd-flag-573381" (driver="docker")
	I0110 08:59:19.936059  226492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:59:19.936120  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:59:19.936159  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:19.958962  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.074923  226492 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:59:20.079159  226492 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:59:20.079189  226492 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:59:20.079201  226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/addons for local assets ...
	I0110 08:59:20.079266  226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/files for local assets ...
	I0110 08:59:20.079356  226492 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> 40942.pem in /etc/ssl/certs
	I0110 08:59:20.079369  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /etc/ssl/certs/40942.pem
	I0110 08:59:20.079482  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:59:20.088316  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:20.110932  226492 start.go:296] duration metric: took 174.869214ms for postStartSetup
	I0110 08:59:20.111307  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:20.129061  226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
	I0110 08:59:20.129339  226492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:59:20.129450  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.146816  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.266379  226492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:59:20.271370  226492 start.go:128] duration metric: took 11.406310013s to createHost
	I0110 08:59:20.271395  226492 start.go:83] releasing machines lock for "force-systemd-flag-573381", held for 11.406430793s
	I0110 08:59:20.271464  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:20.288793  226492 ssh_runner.go:195] Run: cat /version.json
	I0110 08:59:20.288851  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.289074  226492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:59:20.289133  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.322735  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.334868  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.541956  226492 ssh_runner.go:195] Run: systemctl --version
	I0110 08:59:20.549992  226492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:59:20.556905  226492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:59:20.556995  226492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:59:20.586357  226492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 08:59:20.586433  226492 start.go:496] detecting cgroup driver to use...
	I0110 08:59:20.586462  226492 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:20.586586  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:20.601310  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 08:59:20.610472  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 08:59:20.619345  226492 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 08:59:20.619503  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 08:59:20.631919  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:20.640858  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 08:59:20.650267  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:20.659888  226492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:59:20.668204  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 08:59:20.677415  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 08:59:20.688627  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 08:59:20.697816  226492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:59:20.705665  226492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:59:20.713436  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:20.851878  226492 ssh_runner.go:195] Run: sudo systemctl restart containerd
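The sed edits above flip containerd's runc runtime onto the systemd cgroup driver in place. A minimal standalone sketch of the same change, assuming the /etc/containerd/config.toml path shown in the log (the grep verification at the end is an added assumption, not a step the test runs):

	# Force SystemdCgroup = true for runc in containerd's config,
	# then reload units and restart containerd (as the log does).
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
	# Assumed verification step: the effective value should now read "true".
	grep -n 'SystemdCgroup' /etc/containerd/config.toml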
	I0110 08:59:20.975957  226492 start.go:496] detecting cgroup driver to use...
	I0110 08:59:20.976035  226492 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:20.976120  226492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 08:59:20.994585  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:21.015980  226492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:59:21.047963  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:21.061003  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 08:59:21.076487  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:21.092674  226492 ssh_runner.go:195] Run: which cri-dockerd
	I0110 08:59:21.096718  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 08:59:21.104845  226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 08:59:21.119518  226492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 08:59:21.267305  226492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 08:59:21.412794  226492 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 08:59:21.412940  226492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 08:59:21.428668  226492 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 08:59:21.442271  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:21.585985  226492 ssh_runner.go:195] Run: sudo systemctl restart docker
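The log records only that a 129-byte /etc/docker/daemon.json was written from memory before the docker restart; the exact contents are not captured. A plausible minimal sketch of such a file for forcing the systemd cgroup driver (the JSON contents are an assumption; only the path and the restart sequence come from the log):

	# Assumed daemon.json contents; the log shows only the write and its size.
	cat <<-'EOF' | sudo tee /etc/docker/daemon.json
		{
		  "exec-opts": ["native.cgroupdriver=systemd"]
		}
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart docker
	# The log later confirms the result with this same query:
	docker info --format '{{.CgroupDriver}}'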
	I0110 08:59:22.079009  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:59:22.093689  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 08:59:22.109192  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:22.124141  226492 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 08:59:22.285826  226492 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 08:59:22.470044  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:22.631147  226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 08:59:22.649887  226492 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 08:59:22.664808  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:22.817595  226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 08:59:22.901926  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:22.921322  226492 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 08:59:22.921557  226492 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 08:59:22.926346  226492 start.go:574] Will wait 60s for crictl version
	I0110 08:59:22.926464  226492 ssh_runner.go:195] Run: which crictl
	I0110 08:59:22.930949  226492 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:59:22.967399  226492 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 08:59:22.967545  226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:23.013575  226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:23.047281  226492 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 08:59:23.047431  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:23.066948  226492 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 08:59:23.071229  226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:23.080762  226492 kubeadm.go:884] updating cluster {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:59:23.080873  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:23.080927  226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:23.099976  226492 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:23.099997  226492 docker.go:624] Images already preloaded, skipping extraction
	I0110 08:59:23.100066  226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:23.131172  226492 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:23.131194  226492 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:59:23.131204  226492 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0110 08:59:23.131305  226492 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-573381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:59:23.131368  226492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 08:59:23.199852  226492 cni.go:84] Creating CNI manager for ""
	I0110 08:59:23.199937  226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:23.199990  226492 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:59:23.200028  226492 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-573381 NodeName:force-systemd-flag-573381 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:59:23.200180  226492 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-573381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 08:59:23.200298  226492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:59:23.208388  226492 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:59:23.208452  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:59:23.216341  226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0110 08:59:23.229196  226492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:59:23.241814  226492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0110 08:59:23.255178  226492 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:59:23.258978  226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:23.269270  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:23.403518  226492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:59:23.422001  226492 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381 for IP: 192.168.85.2
	I0110 08:59:23.422072  226492 certs.go:195] generating shared ca certs ...
	I0110 08:59:23.422112  226492 certs.go:227] acquiring lock for ca certs: {Name:mk8055241a73ed80e6751b465b7d27c66c028c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.422308  226492 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key
	I0110 08:59:23.422375  226492 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key
	I0110 08:59:23.422398  226492 certs.go:257] generating profile certs ...
	I0110 08:59:23.422483  226492 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key
	I0110 08:59:23.422517  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt with IP's: []
	I0110 08:59:23.559653  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt ...
	I0110 08:59:23.559734  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt: {Name:mkcd1531c8c1d18ccd6c5fe039b9f1900cfb2c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.559957  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key ...
	I0110 08:59:23.559993  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key: {Name:mk3d7418d4f308035237fc3f9abca77e176904a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.560151  226492 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88
	I0110 08:59:23.560193  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 08:59:23.877470  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 ...
	I0110 08:59:23.877541  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88: {Name:mk908398532d92633125c591bd292afec3cf2db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.877769  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 ...
	I0110 08:59:23.877802  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88: {Name:mk0bd2f2259a70d86d7ac055c0b2e17ebe7e9105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.877941  226492 certs.go:382] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt
	I0110 08:59:23.878079  226492 certs.go:386] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key
	I0110 08:59:23.878168  226492 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key
	I0110 08:59:23.878219  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt with IP's: []
	I0110 08:59:24.034669  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt ...
	I0110 08:59:24.034725  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt: {Name:mkf9293bc335f7385742865bf35c11d43e999969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:24.034928  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key ...
	I0110 08:59:24.034967  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key: {Name:mk223176e848184d582c970ee99983183f6c07ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:24.035099  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 08:59:24.035145  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 08:59:24.035175  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 08:59:24.035220  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 08:59:24.035255  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 08:59:24.035287  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 08:59:24.035332  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 08:59:24.035369  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 08:59:24.035459  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem (1338 bytes)
	W0110 08:59:24.035533  226492 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094_empty.pem, impossibly tiny 0 bytes
	I0110 08:59:24.035575  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:59:24.035631  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem (1082 bytes)
	I0110 08:59:24.035696  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:59:24.035747  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem (1679 bytes)
	I0110 08:59:24.035834  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:24.035892  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.035944  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.035978  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem -> /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.036583  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:59:24.054568  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:59:24.076860  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:59:24.095940  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:59:24.118670  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 08:59:24.138950  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:59:24.164637  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:59:24.184864  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:59:24.209003  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /usr/share/ca-certificates/40942.pem (1708 bytes)
	I0110 08:59:24.231074  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:59:24.255151  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem --> /usr/share/ca-certificates/4094.pem (1338 bytes)
	I0110 08:59:24.278140  226492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:59:24.295203  226492 ssh_runner.go:195] Run: openssl version
	I0110 08:59:24.301854  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.309375  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40942.pem /etc/ssl/certs/40942.pem
	I0110 08:59:24.318264  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.324783  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:26 /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.324855  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.372891  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:24.381791  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40942.pem /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:24.390489  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.398828  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:59:24.407913  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.412372  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.412452  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.473682  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:59:24.483843  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:59:24.492073  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.499221  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4094.pem /etc/ssl/certs/4094.pem
	I0110 08:59:24.506976  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.511249  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:26 /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.511365  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.552759  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:59:24.560412  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4094.pem /etc/ssl/certs/51391683.0
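The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. A sketch of one iteration, using the minikubeCA.pem values that appear in the log (b5213941 is the hash the log computed):

	# Link a CA cert under its OpenSSL subject-hash name so that
	# verification tools can find it in /etc/ssl/certs.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # -> b5213941 per the log
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"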
	I0110 08:59:24.569027  226492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:59:24.572610  226492 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:59:24.572709  226492 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:24.572848  226492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 08:59:24.589197  226492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:59:24.597153  226492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:59:24.604880  226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:59:24.604945  226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:59:24.612455  226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:59:24.612476  226492 kubeadm.go:158] found existing configuration files:
	
	I0110 08:59:24.612556  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:59:24.620165  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:59:24.620239  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:59:24.627301  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:59:24.634604  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:59:24.634677  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:59:24.642012  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.649868  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:59:24.649940  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.657446  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:59:24.665005  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:59:24.665080  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:59:24.672497  226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:59:24.714035  226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:59:24.714212  226492 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:59:24.792424  226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:59:24.792577  226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 08:59:24.792652  226492 kubeadm.go:319] OS: Linux
	I0110 08:59:24.792740  226492 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:59:24.792828  226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 08:59:24.792903  226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:59:24.792981  226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:59:24.793060  226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:59:24.793140  226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:59:24.793217  226492 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:59:24.793293  226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:59:24.793408  226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 08:59:24.860290  226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:59:24.860468  226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:59:24.860597  226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:59:24.877774  226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:59:24.883916  226492 out.go:252]   - Generating certificates and keys ...
	I0110 08:59:24.884078  226492 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:59:24.884189  226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:59:25.017207  226492 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:59:25.505301  226492 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:59:25.598478  226492 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:59:25.907160  226492 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:59:26.177844  226492 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:59:26.178499  226492 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 08:59:26.496023  226492 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:59:26.496358  226492 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 08:59:26.690002  226492 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:59:27.036356  226492 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:59:27.401186  226492 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:59:27.401511  226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:59:27.640969  226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:59:27.949614  226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:59:28.312484  226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:59:28.649712  226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:59:29.128888  226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:59:29.129663  226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:59:29.133359  226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:59:29.136985  226492 out.go:252]   - Booting up control plane ...
	I0110 08:59:29.137093  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:59:29.137176  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:59:29.138118  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:59:29.186624  226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:59:29.186957  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:59:29.196035  226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:59:29.196583  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:59:29.196880  226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:59:29.335046  226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:59:29.335207  226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:03:29.334568  226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279732s
	I0110 09:03:29.334620  226492 kubeadm.go:319] 
	I0110 09:03:29.334691  226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:03:29.334725  226492 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:03:29.334838  226492 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:03:29.334843  226492 kubeadm.go:319] 
	I0110 09:03:29.334951  226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:03:29.334987  226492 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:03:29.335018  226492 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:03:29.335022  226492 kubeadm.go:319] 
	I0110 09:03:29.338362  226492 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:29.338843  226492 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:29.339001  226492 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:29.339272  226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:03:29.339282  226492 kubeadm.go:319] 
	I0110 09:03:29.339450  226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
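kubeadm's own suggestions for this failure are the first two commands below; the cgroup-driver cross-check is an added assumption, since a docker/kubelet driver mismatch is the usual suspect when --force-systemd is in play and the log never captures the kubelet journal:

	# Suggested by kubeadm in the output above:
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	# Assumed cross-check: both sides should report "systemd".
	docker info --format '{{.CgroupDriver}}'
	grep cgroupDriver /var/lib/kubelet/config.yaml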
	W0110 09:03:29.339564  226492 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
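The SystemVerification warning in the stderr above states that a v1.35+ kubelet on a cgroup v1 kernel requires the kubelet configuration option FailCgroupV1 to be set to false. A sketch of the corresponding KubeletConfiguration fragment, with the field name taken from that warning (appending to the generated config.yaml is illustrative only; kubeadm rewrites this file on init):

	# Illustrative only: cgroup v1 opt-in per the warning text above.
	cat <<-'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
		failCgroupV1: false
	EOF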
	
	I0110 09:03:29.339670  226492 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 09:03:29.762408  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:03:29.775647  226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:03:29.775764  226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:03:29.783284  226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:03:29.783304  226492 kubeadm.go:158] found existing configuration files:
	
	I0110 09:03:29.783360  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:03:29.790865  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:03:29.790931  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:03:29.798651  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:03:29.806487  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:03:29.806554  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:03:29.813908  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.821677  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:03:29.821788  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.829171  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:03:29.836791  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:03:29.836888  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:03:29.844589  226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:03:29.883074  226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:03:29.883137  226492 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:03:29.995124  226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:03:29.995217  226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:03:29.995281  226492 kubeadm.go:319] OS: Linux
	I0110 09:03:29.995380  226492 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:03:29.995459  226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:03:29.995536  226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:03:29.995609  226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:03:29.995686  226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:03:29.995789  226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:03:29.995871  226492 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:03:29.995956  226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:03:29.996048  226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:03:30.094129  226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:03:30.094508  226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:03:30.094661  226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:03:30.113829  226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:03:30.119048  226492 out.go:252]   - Generating certificates and keys ...
	I0110 09:03:30.119164  226492 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:03:30.119263  226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:03:30.119389  226492 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:03:30.119469  226492 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:03:30.119568  226492 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:03:30.119637  226492 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:03:30.119720  226492 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:03:30.119798  226492 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:03:30.119888  226492 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:03:30.119990  226492 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:03:30.120045  226492 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:03:30.120121  226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:03:30.292257  226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:03:30.550762  226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:03:30.719598  226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:03:30.988775  226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:03:31.135675  226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:03:31.136918  226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:03:31.141259  226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:03:31.144663  226492 out.go:252]   - Booting up control plane ...
	I0110 09:03:31.144774  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:03:31.144862  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:03:31.145855  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:03:31.166964  226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:03:31.167098  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:03:31.174610  226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:03:31.175019  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:03:31.175233  226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:03:31.309599  226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:03:31.309777  226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:31.312855  226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001072153s
	I0110 09:07:31.312882  226492 kubeadm.go:319] 
	I0110 09:07:31.312939  226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:31.312973  226492 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:31.313078  226492 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:31.313082  226492 kubeadm.go:319] 
	I0110 09:07:31.313187  226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:31.313219  226492 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:31.313250  226492 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:31.313254  226492 kubeadm.go:319] 
	I0110 09:07:31.318635  226492 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:07:31.319089  226492 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:07:31.319205  226492 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:07:31.319497  226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 09:07:31.319504  226492 kubeadm.go:319] 
	I0110 09:07:31.319742  226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:07:31.319823  226492 kubeadm.go:403] duration metric: took 8m6.74711775s to StartCluster
	I0110 09:07:31.319867  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:31.319926  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:31.381554  226492 cri.go:96] found id: ""
	I0110 09:07:31.381641  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.381665  226492 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:31.381700  226492 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 09:07:31.381782  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:31.418897  226492 cri.go:96] found id: ""
	I0110 09:07:31.418972  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.418995  226492 logs.go:284] No container was found matching "etcd"
	I0110 09:07:31.419016  226492 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 09:07:31.419107  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:31.483500  226492 cri.go:96] found id: ""
	I0110 09:07:31.483590  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.483614  226492 logs.go:284] No container was found matching "coredns"
	I0110 09:07:31.483658  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:31.483764  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:31.520794  226492 cri.go:96] found id: ""
	I0110 09:07:31.520827  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.520837  226492 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:31.520844  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:31.520902  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:31.566878  226492 cri.go:96] found id: ""
	I0110 09:07:31.566900  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.566909  226492 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:31.566915  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:31.566979  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:31.606008  226492 cri.go:96] found id: ""
	I0110 09:07:31.606036  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.606045  226492 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:31.606052  226492 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:31.606109  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:31.638251  226492 cri.go:96] found id: ""
	I0110 09:07:31.638279  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.638288  226492 logs.go:284] No container was found matching "kindnet"
	I0110 09:07:31.638298  226492 logs.go:123] Gathering logs for container status ...
	I0110 09:07:31.638310  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:31.692020  226492 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:31.692050  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:31.772520  226492 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:31.772553  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:31.787224  226492 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:31.787256  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:31.865808  226492 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:31.857753    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.858695    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860406    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860729    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.862213    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:31.857753    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.858695    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860406    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860729    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.862213    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 09:07:31.865836  226492 logs.go:123] Gathering logs for Docker ...
	I0110 09:07:31.865851  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0110 09:07:31.891246  226492 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:07:31.891318  226492 out.go:285] * 
	W0110 09:07:31.891479  226492 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.891501  226492 out.go:285] * 
	W0110 09:07:31.891823  226492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:07:31.898116  226492 out.go:203] 
	W0110 09:07:31.901143  226492 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.901204  226492 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:31.901332  226492 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:07:31.905173  226492 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
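The proximate failure is clear from the log: kubeadm's wait-control-plane phase timed out because the kubelet never answered its health check at http://127.0.0.1:10248/healthz, and minikube exited with K8S_KUBELET_NOT_RUNNING (exit status 109). As a sketch of the retry minikube itself suggests above (same flags as the failed invocation plus the proposed kubelet cgroup-driver override; untested against this failure), one might run:

	out/minikube-linux-arm64 delete -p force-systemd-flag-573381
	out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd \
	    --driver=docker --container-runtime=docker \
	    --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=5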
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-573381 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 09:07:32.591239659 +0000 UTC m=+2835.060327617
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-573381
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-573381:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532",
	        "Created": "2026-01-10T08:59:12.979495423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:59:13.052217169Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/hosts",
	        "LogPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532-json.log",
	        "Name": "/force-systemd-flag-573381",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-573381:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-573381",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532",
	                "LowerDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837-init/diff:/var/lib/docker/overlay2/248ee347a986ccd1655df91e733f088b104cf9846d12889b06882322d682136d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-573381",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-573381/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-573381",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-573381",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-573381",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e263f37d0636bb7dd6f57071b275162fbbc28f89c393acec7c1f5f7f2bb51cd",
	            "SandboxKey": "/var/run/docker/netns/1e263f37d063",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33002"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-573381": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:96:39:c4:d6:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7be823994696e8ebc8abb040bfaf9ab6d9ad8cae383b9c947fcf664e6de8f1b6",
	                    "EndpointID": "3a9247ceaf35a07d27563c2530bf248b53caeb3db10bfb9883d8b5508563a5b7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-573381",
	                        "ca7fb38a7663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-573381 -n force-systemd-flag-573381
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-573381 -n force-systemd-flag-573381: exit status 6 (397.390025ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:07:32.994400  239446 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-573381" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
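The stdout above points at `minikube update-context` as the fix for the stale kubectl context. A minimal sketch of that invocation against this profile; note the stderr shows the context is missing from the kubeconfig entirely, so whether update-context alone recovers it is not established by this report:

	# Sketch only: refresh the kubeconfig entry for the profile under test,
	# per the warning printed in the status output above.
	out/minikube-linux-arm64 update-context -p force-systemd-flag-573381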
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-573381 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-632912 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                              │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo docker system info                                                                                                                                                                                                            │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo containerd config dump                                                                                                                                                                                                        │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo crio config                                                                                                                                                                                                                   │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ delete  │ -p cilium-632912                                                                                                                                                                                                                                    │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ 10 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                       │ force-systemd-flag-573381 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ force-systemd-env-861581 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                 │ force-systemd-env-861581  │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
	│ delete  │ -p force-systemd-env-861581                                                                                                                                                                                                                         │ force-systemd-env-861581  │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
	│ start   │ -p docker-flags-543601 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ docker-flags-543601       │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │                     │
	│ ssh     │ force-systemd-flag-573381 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                │ force-systemd-flag-573381 │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:07:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:07:30.577954  238959 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:07:30.578077  238959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:07:30.578098  238959 out.go:374] Setting ErrFile to fd 2...
	I0110 09:07:30.578103  238959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:07:30.578356  238959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 09:07:30.578782  238959 out.go:368] Setting JSON to false
	I0110 09:07:30.579618  238959 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3003,"bootTime":1768033048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 09:07:30.579691  238959 start.go:143] virtualization:  
	I0110 09:07:30.583609  238959 out.go:179] * [docker-flags-543601] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:07:30.588137  238959 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:07:30.588195  238959 notify.go:221] Checking for updates...
	I0110 09:07:30.591563  238959 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:07:30.594993  238959 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 09:07:30.598201  238959 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 09:07:30.601415  238959 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:07:30.604533  238959 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:07:30.608067  238959 config.go:182] Loaded profile config "force-systemd-flag-573381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 09:07:30.608279  238959 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:07:30.642980  238959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:07:30.643091  238959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:07:30.742610  238959 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:07:30.733011256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:07:30.742716  238959 docker.go:319] overlay module found
	I0110 09:07:30.746007  238959 out.go:179] * Using the docker driver based on user configuration
	I0110 09:07:30.748902  238959 start.go:309] selected driver: docker
	I0110 09:07:30.748924  238959 start.go:928] validating driver "docker" against <nil>
	I0110 09:07:30.748937  238959 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:07:30.749743  238959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:07:30.799716  238959 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:07:30.79046822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:07:30.799863  238959 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:07:30.800089  238959 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0110 09:07:30.803032  238959 out.go:179] * Using Docker driver with root privileges
	I0110 09:07:30.805977  238959 cni.go:84] Creating CNI manager for ""
	I0110 09:07:30.806059  238959 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 09:07:30.806074  238959 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 09:07:30.806162  238959 start.go:353] cluster config:
	{Name:docker-flags-543601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-543601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:07:30.809341  238959 out.go:179] * Starting "docker-flags-543601" primary control-plane node in "docker-flags-543601" cluster
	I0110 09:07:30.812218  238959 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 09:07:30.815434  238959 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:07:30.818384  238959 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 09:07:30.818453  238959 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 09:07:30.818467  238959 cache.go:65] Caching tarball of preloaded images
	I0110 09:07:30.818475  238959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:07:30.818557  238959 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 09:07:30.818567  238959 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 09:07:30.818677  238959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/docker-flags-543601/config.json ...
	I0110 09:07:30.818694  238959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/docker-flags-543601/config.json: {Name:mkc9ce1b0b1e8d58c1796eb0043a2540bdcf4784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:07:30.838350  238959 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:07:30.838373  238959 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:07:30.838391  238959 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:07:30.838421  238959 start.go:360] acquireMachinesLock for docker-flags-543601: {Name:mk04825a748eadeee6f551dea778247eb4fd7a21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:07:30.838530  238959 start.go:364] duration metric: took 89.322µs to acquireMachinesLock for "docker-flags-543601"
	I0110 09:07:30.838559  238959 start.go:93] Provisioning new machine with config: &{Name:docker-flags-543601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-543601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 09:07:30.838626  238959 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:07:31.312855  226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001072153s
	I0110 09:07:31.312882  226492 kubeadm.go:319] 
	I0110 09:07:31.312939  226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:31.312973  226492 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:31.313078  226492 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:31.313082  226492 kubeadm.go:319] 
	I0110 09:07:31.313187  226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:31.313219  226492 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:31.313250  226492 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:31.313254  226492 kubeadm.go:319] 
	I0110 09:07:31.318635  226492 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:07:31.319089  226492 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:07:31.319205  226492 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:07:31.319497  226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 09:07:31.319504  226492 kubeadm.go:319] 
	I0110 09:07:31.319742  226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:07:31.319823  226492 kubeadm.go:403] duration metric: took 8m6.74711775s to StartCluster
	I0110 09:07:31.319867  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:31.319926  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:31.381554  226492 cri.go:96] found id: ""
	I0110 09:07:31.381641  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.381665  226492 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:31.381700  226492 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 09:07:31.381782  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:31.418897  226492 cri.go:96] found id: ""
	I0110 09:07:31.418972  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.418995  226492 logs.go:284] No container was found matching "etcd"
	I0110 09:07:31.419016  226492 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 09:07:31.419107  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:31.483500  226492 cri.go:96] found id: ""
	I0110 09:07:31.483590  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.483614  226492 logs.go:284] No container was found matching "coredns"
	I0110 09:07:31.483658  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:31.483764  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:31.520794  226492 cri.go:96] found id: ""
	I0110 09:07:31.520827  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.520837  226492 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:31.520844  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:31.520902  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:31.566878  226492 cri.go:96] found id: ""
	I0110 09:07:31.566900  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.566909  226492 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:31.566915  226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:31.566979  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:31.606008  226492 cri.go:96] found id: ""
	I0110 09:07:31.606036  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.606045  226492 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:31.606052  226492 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:31.606109  226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:31.638251  226492 cri.go:96] found id: ""
	I0110 09:07:31.638279  226492 logs.go:282] 0 containers: []
	W0110 09:07:31.638288  226492 logs.go:284] No container was found matching "kindnet"
	I0110 09:07:31.638298  226492 logs.go:123] Gathering logs for container status ...
	I0110 09:07:31.638310  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:31.692020  226492 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:31.692050  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:31.772520  226492 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:31.772553  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:31.787224  226492 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:31.787256  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:31.865808  226492 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:31.857753    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.858695    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860406    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860729    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.862213    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:31.857753    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.858695    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860406    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.860729    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.862213    5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 09:07:31.865836  226492 logs.go:123] Gathering logs for Docker ...
	I0110 09:07:31.865851  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0110 09:07:31.891246  226492 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:07:31.891318  226492 out.go:285] * 
	W0110 09:07:31.891479  226492 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.891501  226492 out.go:285] * 
	W0110 09:07:31.891823  226492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:07:31.898116  226492 out.go:203] 
	W0110 09:07:31.901143  226492 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001072153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.901204  226492 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:31.901332  226492 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:07:31.905173  226492 out.go:203] 
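The suggestion printed above recommends retrying with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of that retry, reusing the flags from the failed invocation recorded in the audit table; whether this clears the cgroup v1 kubelet validation on this 5.15 kernel is not established by this log:

	# Sketch only: delete the broken profile, then retry with the kubelet
	# cgroup driver pinned to systemd, per the suggestion above.
	out/minikube-linux-arm64 delete -p force-systemd-flag-573381
	out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 \
	  --force-systemd --alsologtostderr -v=5 --driver=docker \
	  --container-runtime=docker \
	  --extra-config=kubelet.cgroup-driver=systemd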
	
	
	==> Docker <==
	Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.818754131Z" level=info msg="Restoring containers: start."
	Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.833730578Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.853745151Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.018904601Z" level=info msg="Loading containers: done."
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036117918Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036191157Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036233603Z" level=info msg="Initializing buildkit"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.070227559Z" level=info msg="Completed buildkit initialization"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.075723002Z" level=info msg="Daemon has completed initialization"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.075799671Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.080306083Z" level=info msg="API listen on /run/docker.sock"
	Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.080412505Z" level=info msg="API listen on [::]:2376"
	Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Start docker client with request timeout 0s"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Loaded network plugin cni"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Setting cgroupDriver systemd"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:33.665933    5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.666625    5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.668194    5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.668715    5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.670219    5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014340] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.489012] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033977] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807327] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.189402] kauditd_printk_skb: 36 callbacks suppressed
	[Jan10 08:46] hrtimer: interrupt took 42078579 ns
	
	
	==> kernel <==
	 09:07:33 up 50 min,  0 user,  load average: 0.83, 1.12, 1.78
	Linux force-systemd-flag-573381 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 09:07:29 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:30 force-systemd-flag-573381 kubelet[5537]: E0110 09:07:30.722485    5537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:31 force-systemd-flag-573381 kubelet[5561]: E0110 09:07:31.495793    5561 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-flag-573381 kubelet[5630]: E0110 09:07:32.322107    5630 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:33 force-systemd-flag-573381 kubelet[5678]: E0110 09:07:33.276274    5678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
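The kubelet crash loop captured above is the proximate cause of this failure: the kubelet exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it (counters 319 through 322 above), and the apiserver never comes up, which is why the describe-nodes and status probes of localhost:8443 are all refused. A quick way to confirm which cgroup hierarchy a host or node container is actually on, sketched here as a diagnostic aside rather than part of the harness (the container name is taken from this run):

    # "cgroup2fs" means cgroup v2 (unified); "tmpfs" means cgroup v1 (legacy/hybrid).
    stat -fc %T /sys/fs/cgroup/
    # Same check inside the minikube node container used by this test:
    docker exec force-systemd-flag-573381 stat -fc %T /sys/fs/cgroup/

Containers share the host kernel's cgroup hierarchy, so on this Ubuntu 20.04 runner the node container sees cgroup v1 and the restart counter simply climbs until the harness gives up.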
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-573381 -n force-systemd-flag-573381
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-573381 -n force-systemd-flag-573381: exit status 6 (346.781733ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 09:07:34.153543  239680 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-573381" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-573381" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-573381" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-573381
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-573381: (2.09211065s)
--- FAIL: TestForceSystemdFlag (507.82s)

TestForceSystemdEnv (507.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-861581 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-861581 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m23.620124872s)

-- stdout --
	* [force-systemd-env-861581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-861581" primary control-plane node in "force-systemd-env-861581" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	
	

-- /stdout --
** stderr ** 
	I0110 08:59:02.987132  225391 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:59:02.987453  225391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:02.987479  225391 out.go:374] Setting ErrFile to fd 2...
	I0110 08:59:02.987500  225391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:02.987795  225391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:59:02.988252  225391 out.go:368] Setting JSON to false
	I0110 08:59:02.989546  225391 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2495,"bootTime":1768033048,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:59:02.989644  225391 start.go:143] virtualization:  
	I0110 08:59:02.993109  225391 out.go:179] * [force-systemd-env-861581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:59:02.997555  225391 notify.go:221] Checking for updates...
	I0110 08:59:03.004946  225391 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:59:03.008464  225391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:59:03.011546  225391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:59:03.014617  225391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:59:03.017703  225391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:59:03.020682  225391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0110 08:59:03.023881  225391 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:59:03.066948  225391 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:59:03.067061  225391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:03.151355  225391 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 08:59:03.140408584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:03.151464  225391 docker.go:319] overlay module found
	I0110 08:59:03.154618  225391 out.go:179] * Using the docker driver based on user configuration
	I0110 08:59:03.157698  225391 start.go:309] selected driver: docker
	I0110 08:59:03.157718  225391 start.go:928] validating driver "docker" against <nil>
	I0110 08:59:03.157731  225391 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:59:03.158415  225391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:03.238214  225391 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 08:59:03.229337085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:03.238380  225391 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:59:03.238613  225391 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:59:03.241525  225391 out.go:179] * Using Docker driver with root privileges
	I0110 08:59:03.244356  225391 cni.go:84] Creating CNI manager for ""
	I0110 08:59:03.244437  225391 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:03.244452  225391 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 08:59:03.244529  225391 start.go:353] cluster config:
	{Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:03.247538  225391 out.go:179] * Starting "force-systemd-env-861581" primary control-plane node in "force-systemd-env-861581" cluster
	I0110 08:59:03.250336  225391 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 08:59:03.253312  225391 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:59:03.256167  225391 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:03.256213  225391 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 08:59:03.256229  225391 cache.go:65] Caching tarball of preloaded images
	I0110 08:59:03.256321  225391 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 08:59:03.256337  225391 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 08:59:03.256676  225391 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/config.json ...
	I0110 08:59:03.256704  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/config.json: {Name:mk0f5347dacb44d16c8f5947d5833082d636f3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:03.256870  225391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:59:03.287573  225391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:59:03.287599  225391 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:59:03.287617  225391 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:59:03.287662  225391 start.go:360] acquireMachinesLock for force-systemd-env-861581: {Name:mk0f99e59877e1792360d4551946d717b0ab5d4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:59:03.287765  225391 start.go:364] duration metric: took 82.454µs to acquireMachinesLock for "force-systemd-env-861581"
	I0110 08:59:03.287794  225391 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 08:59:03.287862  225391 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:59:03.291129  225391 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:59:03.291378  225391 start.go:159] libmachine.API.Create for "force-systemd-env-861581" (driver="docker")
	I0110 08:59:03.291416  225391 client.go:173] LocalClient.Create starting
	I0110 08:59:03.291508  225391 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem
	I0110 08:59:03.291556  225391 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:03.291579  225391 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:03.291630  225391 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem
	I0110 08:59:03.291652  225391 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:03.291665  225391 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:03.292039  225391 cli_runner.go:164] Run: docker network inspect force-systemd-env-861581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:59:03.322942  225391 cli_runner.go:211] docker network inspect force-systemd-env-861581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:59:03.323032  225391 network_create.go:284] running [docker network inspect force-systemd-env-861581] to gather additional debugging logs...
	I0110 08:59:03.323049  225391 cli_runner.go:164] Run: docker network inspect force-systemd-env-861581
	W0110 08:59:03.352809  225391 cli_runner.go:211] docker network inspect force-systemd-env-861581 returned with exit code 1
	I0110 08:59:03.352840  225391 network_create.go:287] error running [docker network inspect force-systemd-env-861581]: docker network inspect force-systemd-env-861581: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-861581 not found
	I0110 08:59:03.352853  225391 network_create.go:289] output of [docker network inspect force-systemd-env-861581]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-861581 not found
	
	** /stderr **
	I0110 08:59:03.352961  225391 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:03.383232  225391 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1cad6f167682 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:2e:00:65:f8:e1} reservation:<nil>}
	I0110 08:59:03.383485  225391 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-470266542ec0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:41:d2:db:7c:3c} reservation:<nil>}
	I0110 08:59:03.383769  225391 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed6e044af825 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:1d:61:47:90:b1} reservation:<nil>}
	I0110 08:59:03.384108  225391 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019993e0}
	I0110 08:59:03.384131  225391 network_create.go:124] attempt to create docker network force-systemd-env-861581 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 08:59:03.384187  225391 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-861581 force-systemd-env-861581
	I0110 08:59:03.451573  225391 network_create.go:108] docker network force-systemd-env-861581 192.168.76.0/24 created
	I0110 08:59:03.451600  225391 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-861581" container
	I0110 08:59:03.451685  225391 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:59:03.468442  225391 cli_runner.go:164] Run: docker volume create force-systemd-env-861581 --label name.minikube.sigs.k8s.io=force-systemd-env-861581 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:59:03.487400  225391 oci.go:103] Successfully created a docker volume force-systemd-env-861581
	I0110 08:59:03.487493  225391 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-861581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-861581 --entrypoint /usr/bin/test -v force-systemd-env-861581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:59:04.076123  225391 oci.go:107] Successfully prepared a docker volume force-systemd-env-861581
	I0110 08:59:04.076193  225391 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:04.076202  225391 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:59:04.076278  225391 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-861581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:59:07.521146  225391 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-861581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.44482443s)
	I0110 08:59:07.521180  225391 kic.go:203] duration metric: took 3.444974052s to extract preloaded images to volume ...
	W0110 08:59:07.521319  225391 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 08:59:07.521579  225391 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:59:07.613114  225391 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-861581 --name force-systemd-env-861581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-861581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-861581 --network force-systemd-env-861581 --ip 192.168.76.2 --volume force-systemd-env-861581:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 08:59:07.967343  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Running}}
	I0110 08:59:08.009549  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.056957  225391 cli_runner.go:164] Run: docker exec force-systemd-env-861581 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:59:08.189520  225391 oci.go:144] the created container "force-systemd-env-861581" has a running status.
	I0110 08:59:08.189596  225391 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa...
	I0110 08:59:08.268629  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 08:59:08.268715  225391 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:59:08.320052  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.382973  225391 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:59:08.382993  225391 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-861581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:59:08.467273  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.521591  225391 machine.go:94] provisionDockerMachine start ...
	I0110 08:59:08.521684  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:08.560918  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:08.561270  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:08.561287  225391 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:59:08.562000  225391 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 08:59:11.716936  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-861581
	
	I0110 08:59:11.716964  225391 ubuntu.go:182] provisioning hostname "force-systemd-env-861581"
	I0110 08:59:11.717027  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:11.736571  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:11.736890  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:11.736907  225391 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-861581 && echo "force-systemd-env-861581" | sudo tee /etc/hostname
	I0110 08:59:11.897006  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-861581
	
	I0110 08:59:11.897084  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:11.915165  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:11.916016  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:11.916040  225391 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-861581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-861581/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-861581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:59:12.069516  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:59:12.069586  225391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2299/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2299/.minikube}
	I0110 08:59:12.069626  225391 ubuntu.go:190] setting up certificates
	I0110 08:59:12.069636  225391 provision.go:84] configureAuth start
	I0110 08:59:12.069697  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:12.087654  225391 provision.go:143] copyHostCerts
	I0110 08:59:12.087701  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:12.087733  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem, removing ...
	I0110 08:59:12.087743  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:12.087822  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem (1082 bytes)
	I0110 08:59:12.087906  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:12.087927  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem, removing ...
	I0110 08:59:12.087936  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:12.087966  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem (1123 bytes)
	I0110 08:59:12.088015  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:12.088038  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem, removing ...
	I0110 08:59:12.088046  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:12.088071  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem (1679 bytes)
	I0110 08:59:12.088120  225391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-861581 san=[127.0.0.1 192.168.76.2 force-systemd-env-861581 localhost minikube]
	I0110 08:59:12.857670  225391 provision.go:177] copyRemoteCerts
	I0110 08:59:12.857747  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:59:12.857794  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:12.886978  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:12.999971  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 08:59:13.000032  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 08:59:13.024986  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 08:59:13.025049  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 08:59:13.058689  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 08:59:13.058768  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:59:13.086699  225391 provision.go:87] duration metric: took 1.017037968s to configureAuth
	I0110 08:59:13.086727  225391 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:59:13.086896  225391 config.go:182] Loaded profile config "force-systemd-env-861581": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:13.086952  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.105444  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.105761  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.105778  225391 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 08:59:13.277968  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 08:59:13.277992  225391 ubuntu.go:71] root file system type: overlay
	I0110 08:59:13.278147  225391 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 08:59:13.278238  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.308466  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.308769  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.308845  225391 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 08:59:13.534879  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 08:59:13.534968  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.584626  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.584980  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.585004  225391 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 08:59:14.887350  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 08:59:13.523637922 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
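
The diff output above shows why the unit swap happened: the stock docker.service and minikube's generated docker.service.new differ, so the `diff || { ... }` guard falls through to its replacement branch, moves the new unit into place, and reloads and restarts docker. The same update-if-changed pattern, written out as a plain sketch with the paths as they appear in this log:

    # If the rendered unit differs from the installed one, install it and restart.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi

The double ExecStart= in the generated unit is deliberate, as its embedded comments explain: the empty first directive clears the ExecStart inherited from the base configuration so systemd does not reject the service for having two start commands.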
	
	I0110 08:59:14.887382  225391 machine.go:97] duration metric: took 6.36577334s to provisionDockerMachine
	I0110 08:59:14.887395  225391 client.go:176] duration metric: took 11.595970327s to LocalClient.Create
	I0110 08:59:14.887407  225391 start.go:167] duration metric: took 11.596031267s to libmachine.API.Create "force-systemd-env-861581"
	I0110 08:59:14.887414  225391 start.go:293] postStartSetup for "force-systemd-env-861581" (driver="docker")
	I0110 08:59:14.887425  225391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:59:14.887491  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:59:14.887543  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:14.903934  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.012696  225391 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:59:15.028927  225391 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:59:15.028962  225391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:59:15.028977  225391 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/addons for local assets ...
	I0110 08:59:15.029060  225391 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/files for local assets ...
	I0110 08:59:15.029164  225391 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> 40942.pem in /etc/ssl/certs
	I0110 08:59:15.029179  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /etc/ssl/certs/40942.pem
	I0110 08:59:15.029297  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:59:15.038200  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:15.057979  225391 start.go:296] duration metric: took 170.550295ms for postStartSetup
	I0110 08:59:15.058380  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:15.076128  225391 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/config.json ...
	I0110 08:59:15.076425  225391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:59:15.076474  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.094586  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.194782  225391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:59:15.200128  225391 start.go:128] duration metric: took 11.912250731s to createHost
	I0110 08:59:15.200153  225391 start.go:83] releasing machines lock for "force-systemd-env-861581", held for 11.912375106s
	I0110 08:59:15.200222  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:15.217294  225391 ssh_runner.go:195] Run: cat /version.json
	I0110 08:59:15.217478  225391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:59:15.217549  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.217550  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.235126  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.236278  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.435440  225391 ssh_runner.go:195] Run: systemctl --version
	I0110 08:59:15.443064  225391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:59:15.447479  225391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:59:15.447558  225391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:59:15.475208  225391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
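The find/mv run above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which keeps them restorable. A minimal standalone sketch of the same rename-to-disable technique (paths taken from the log; the suffix is minikube's convention):

    # Disable bridge/podman CNI configs by renaming, keeping them recoverable.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] || continue                         # glob matched nothing
      case "$f" in *.mk_disabled) continue ;; esac    # already disabled
      sudo mv "$f" "$f.mk_disabled"
    done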
	I0110 08:59:15.475283  225391 start.go:496] detecting cgroup driver to use...
	I0110 08:59:15.475315  225391 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:15.475457  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:15.489122  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 08:59:15.497941  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 08:59:15.506581  225391 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 08:59:15.506700  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 08:59:15.515313  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:15.524196  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 08:59:15.532638  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:15.540905  225391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:59:15.548584  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 08:59:15.557133  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 08:59:15.565729  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
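The sed series above rewrites /etc/containerd/config.toml in place so containerd drives cgroups through systemd and uses the v2 runc shim. Sketched below is the fragment those edits are expected to leave behind (section nesting assumed from a containerd 1.x CRI config layout, as shipped in the kicbase image), plus a quick check:

    # Expected fragment of /etc/containerd/config.toml after the edits above:
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    #     runtime_type = "io.containerd.runc.v2"
    #     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #       SystemdCgroup = true
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true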
	I0110 08:59:15.574603  225391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:59:15.581887  225391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:59:15.588746  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:15.695968  225391 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 08:59:15.790378  225391 start.go:496] detecting cgroup driver to use...
	I0110 08:59:15.790407  225391 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:15.790463  225391 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 08:59:15.806611  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:15.819580  225391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:59:15.841928  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:15.854744  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 08:59:15.867804  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:15.881334  225391 ssh_runner.go:195] Run: which cri-dockerd
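With runtime-endpoint recorded in /etc/crictl.yaml just above, later crictl invocations need no endpoint flag. For comparison, the explicit form against the same cri-dockerd socket (socket path from the log):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    # With /etc/crictl.yaml in place, the endpoint is picked up implicitly:
    sudo crictl version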
	I0110 08:59:15.885018  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 08:59:15.892436  225391 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 08:59:15.905051  225391 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 08:59:16.011243  225391 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 08:59:16.129796  225391 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 08:59:16.129949  225391 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 08:59:16.143916  225391 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 08:59:16.156220  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:16.272514  225391 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 08:59:16.684258  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
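The 129-byte /etc/docker/daemon.json copied in above is what flips dockerd to the systemd cgroup driver; the docker info call later in the log confirms it. A plausible shape for that file (contents assumed, since the log records only its size):

    # Assumed daemon.json for the cgroup-driver switch; exec-opts is the key line.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: systemd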
	I0110 08:59:16.697425  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 08:59:16.710586  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:16.723645  225391 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 08:59:16.853869  225391 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 08:59:16.969808  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.111276  225391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 08:59:17.127155  225391 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 08:59:17.143618  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.297673  225391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 08:59:17.372533  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:17.387012  225391 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 08:59:17.387077  225391 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 08:59:17.392705  225391 start.go:574] Will wait 60s for crictl version
	I0110 08:59:17.392771  225391 ssh_runner.go:195] Run: which crictl
	I0110 08:59:17.405795  225391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:59:17.435907  225391 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 08:59:17.435978  225391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:17.460682  225391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:17.498709  225391 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 08:59:17.498797  225391 cli_runner.go:164] Run: docker network inspect force-systemd-env-861581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:17.518177  225391 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 08:59:17.521968  225391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
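The bash one-liner above is an idempotent hosts-file update: drop any stale host.minikube.internal line, append the fresh mapping, and copy the result back with sudo (a plain > /etc/hosts redirect would fail, because the redirection is performed by the unprivileged shell before sudo runs). The same pattern, isolated:

    # Idempotently (re)pin a name in /etc/hosts.
    NAME=host.minikube.internal; IP=192.168.76.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$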
	I0110 08:59:17.531613  225391 kubeadm.go:884] updating cluster {Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:59:17.531724  225391 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:17.531778  225391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:17.550956  225391 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:17.550977  225391 docker.go:624] Images already preloaded, skipping extraction
	I0110 08:59:17.551037  225391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:17.568431  225391 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:17.568451  225391 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:59:17.568460  225391 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0110 08:59:17.568558  225391 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-861581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
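In the unit fragment above, the empty ExecStart= line is deliberate: systemd appends ExecStart entries from drop-ins, so a blank one first clears whatever the base kubelet.service defined before the real command line is installed. A minimal drop-in showing the same reset idiom (paths from the log; the real 10-kubeadm.conf written a few lines below is 323 bytes and carries more flags):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet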
	I0110 08:59:17.568632  225391 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 08:59:17.650813  225391 cni.go:84] Creating CNI manager for ""
	I0110 08:59:17.650856  225391 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:17.650886  225391 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:59:17.650906  225391 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-861581 NodeName:force-systemd-env-861581 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:59:17.651053  225391 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-861581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
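Before kubeadm init consumes the config rendered above, it can be sanity-checked offline; recent kubeadm releases ship a validator for exactly this (a sketch, assuming the v1.35.0 binaries staged at the path from the log support kubeadm config validate; the rendered file is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and promoted to kubeadm.yaml before init):

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml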
	I0110 08:59:17.651133  225391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:59:17.660994  225391 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:59:17.661066  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:59:17.669501  225391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0110 08:59:17.686901  225391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:59:17.702726  225391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 08:59:17.717818  225391 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:59:17.722071  225391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:17.733206  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.879466  225391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:59:17.896450  225391 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581 for IP: 192.168.76.2
	I0110 08:59:17.896469  225391 certs.go:195] generating shared ca certs ...
	I0110 08:59:17.896484  225391 certs.go:227] acquiring lock for ca certs: {Name:mk8055241a73ed80e6751b465b7d27c66c028c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:17.896615  225391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key
	I0110 08:59:17.896663  225391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key
	I0110 08:59:17.896671  225391 certs.go:257] generating profile certs ...
	I0110 08:59:17.896724  225391 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key
	I0110 08:59:17.896734  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt with IP's: []
	I0110 08:59:18.156940  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt ...
	I0110 08:59:18.157011  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt: {Name:mk1d644f117041e605f61f9a0109f3255cf0a378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.157243  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key ...
	I0110 08:59:18.157282  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key: {Name:mk9c983eb3342b148732ea944d4f341316b8496c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.157431  225391 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024
	I0110 08:59:18.157471  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 08:59:18.623221  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 ...
	I0110 08:59:18.623299  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024: {Name:mk5f35e2e231e80473024277bf9b28bda21db8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.623532  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024 ...
	I0110 08:59:18.623572  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024: {Name:mke8502b301fe1eaefd3ecf5175774a0c6977987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.623707  225391 certs.go:382] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt
	I0110 08:59:18.623826  225391 certs.go:386] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key
	I0110 08:59:18.623912  225391 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key
	I0110 08:59:18.623961  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt with IP's: []
	I0110 08:59:18.843520  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt ...
	I0110 08:59:18.843585  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt: {Name:mk473d90909ba38637b5738291c41c2829876ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.843780  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key ...
	I0110 08:59:18.843826  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key: {Name:mka7d0a6f5dae045b56116fa1e52e8b6f33982b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.843956  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 08:59:18.844014  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 08:59:18.844045  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 08:59:18.844077  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 08:59:18.844114  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 08:59:18.844147  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 08:59:18.844189  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 08:59:18.844251  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 08:59:18.844330  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem (1338 bytes)
	W0110 08:59:18.844411  225391 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094_empty.pem, impossibly tiny 0 bytes
	I0110 08:59:18.844445  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:59:18.844497  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem (1082 bytes)
	I0110 08:59:18.844549  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:59:18.844593  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem (1679 bytes)
	I0110 08:59:18.844666  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:18.844716  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /usr/share/ca-certificates/40942.pem
	I0110 08:59:18.844753  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:18.844785  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem -> /usr/share/ca-certificates/4094.pem
	I0110 08:59:18.845321  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:59:18.863682  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:59:18.882262  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:59:18.903027  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:59:18.924914  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 08:59:18.944997  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:59:18.971030  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:59:19.005071  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:59:19.026470  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /usr/share/ca-certificates/40942.pem (1708 bytes)
	I0110 08:59:19.046299  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:59:19.065759  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem --> /usr/share/ca-certificates/4094.pem (1338 bytes)
	I0110 08:59:19.084589  225391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:59:19.098722  225391 ssh_runner.go:195] Run: openssl version
	I0110 08:59:19.104656  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.123473  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40942.pem /etc/ssl/certs/40942.pem
	I0110 08:59:19.134729  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.144190  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:26 /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.144276  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.186872  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:19.195411  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40942.pem /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:19.203476  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.211556  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:59:19.219494  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.223737  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.223822  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.274354  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:59:19.286142  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:59:19.299107  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.308444  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4094.pem /etc/ssl/certs/4094.pem
	I0110 08:59:19.316014  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.320081  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:26 /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.320154  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.363597  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:59:19.372441  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4094.pem /etc/ssl/certs/51391683.0
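The openssl x509 -hash calls above explain the opaque symlink names: 3ec20f2e.0, b5213941.0 and 51391683.0 are OpenSSL subject-name hashes, the c_rehash-style lookup scheme used by /etc/ssl/certs. The pattern, reproduced standalone:

    # Install a CA cert under OpenSSL's hashed-name convention, as done above.
    CRT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CRT")      # b5213941 for this CA
    sudo ln -fs "$CRT" "/etc/ssl/certs/$HASH.0"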
	I0110 08:59:19.380290  225391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:59:19.384791  225391 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:59:19.384892  225391 kubeadm.go:401] StartCluster: {Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:19.385082  225391 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 08:59:19.430853  225391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:59:19.446599  225391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:59:19.466413  225391 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:59:19.466514  225391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:59:19.488057  225391 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:59:19.488126  225391 kubeadm.go:158] found existing configuration files:
	
	I0110 08:59:19.488209  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:59:19.501218  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:59:19.501329  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:59:19.513574  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:59:19.522958  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:59:19.523076  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:59:19.531066  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:59:19.542959  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:59:19.543075  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:59:19.552051  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:59:19.561757  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:59:19.561870  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:59:19.570721  225391 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:59:19.617724  225391 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:59:19.617941  225391 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:59:19.713552  225391 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:59:19.713672  225391 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 08:59:19.713763  225391 kubeadm.go:319] OS: Linux
	I0110 08:59:19.713844  225391 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:59:19.713925  225391 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 08:59:19.714003  225391 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:59:19.714084  225391 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:59:19.714164  225391 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:59:19.714245  225391 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:59:19.714322  225391 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:59:19.714403  225391 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:59:19.714492  225391 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 08:59:19.796527  225391 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:59:19.796693  225391 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:59:19.796816  225391 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:59:19.818202  225391 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:59:19.823882  225391 out.go:252]   - Generating certificates and keys ...
	I0110 08:59:19.823974  225391 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:59:19.824054  225391 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:59:19.986091  225391 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:59:20.196887  225391 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:59:20.465190  225391 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:59:20.561747  225391 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:59:20.641139  225391 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:59:20.641461  225391 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:20.901319  225391 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:59:20.901553  225391 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:21.168646  225391 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:59:21.987096  225391 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:59:22.529820  225391 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:59:22.532257  225391 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:59:22.973473  225391 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:59:23.631013  225391 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:59:23.965834  225391 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:59:24.017523  225391 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:59:24.210843  225391 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:59:24.210957  225391 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:59:24.213651  225391 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:59:24.219731  225391 out.go:252]   - Booting up control plane ...
	I0110 08:59:24.219846  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:59:24.219940  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:59:24.220011  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:59:24.255368  225391 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:59:24.255481  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:59:24.270529  225391 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:59:24.270635  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:59:24.270680  225391 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:59:24.457003  225391 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:59:24.457138  225391 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:03:24.459506  225391 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002824481s
	I0110 09:03:24.459548  225391 kubeadm.go:319] 
	I0110 09:03:24.459612  225391 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:03:24.459651  225391 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:03:24.459760  225391 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:03:24.459766  225391 kubeadm.go:319] 
	I0110 09:03:24.459870  225391 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:03:24.459902  225391 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:03:24.459947  225391 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:03:24.459956  225391 kubeadm.go:319] 
	I0110 09:03:24.465112  225391 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:24.465556  225391 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:24.465665  225391 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:24.465899  225391 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:03:24.465905  225391 kubeadm.go:319] 
	I0110 09:03:24.465973  225391 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 09:03:24.466095  225391 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002824481s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0110 09:03:24.466174  225391 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 09:03:24.893626  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:03:24.906961  225391 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:03:24.907022  225391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:03:24.914984  225391 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:03:24.915051  225391 kubeadm.go:158] found existing configuration files:
	
	I0110 09:03:24.915107  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:03:24.922564  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:03:24.922631  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:03:24.929801  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:03:24.937325  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:03:24.937407  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:03:24.944505  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:03:24.952260  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:03:24.952330  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:03:24.959894  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:03:24.967585  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:03:24.967694  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
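	The four grep-then-rm pairs above reduce to a simple loop; a sketch of the equivalent cleanup (not minikube's actual source, just the same effect):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	    done

	Since none of the files exist at this point (the ls check above already failed), every grep exits 2 and every rm is a no-op.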
	I0110 09:03:24.975150  225391 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:03:25.098938  225391 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:25.099421  225391 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:25.179732  225391 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:07:26.065192  225391 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:07:26.065226  225391 kubeadm.go:319] 
	I0110 09:07:26.065310  225391 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:07:26.069589  225391 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:07:26.069662  225391 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:07:26.069750  225391 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:07:26.069814  225391 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:07:26.069847  225391 kubeadm.go:319] OS: Linux
	I0110 09:07:26.069890  225391 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:07:26.069945  225391 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:07:26.069994  225391 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:07:26.070043  225391 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:07:26.070092  225391 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:07:26.070147  225391 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:07:26.070190  225391 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:07:26.070244  225391 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:07:26.070287  225391 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:07:26.070370  225391 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:07:26.070462  225391 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:07:26.070546  225391 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:07:26.070608  225391 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:07:26.076518  225391 out.go:252]   - Generating certificates and keys ...
	I0110 09:07:26.076620  225391 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:07:26.076690  225391 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:07:26.076769  225391 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:07:26.076831  225391 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:07:26.076902  225391 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:07:26.076958  225391 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:07:26.077018  225391 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:07:26.077080  225391 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:07:26.077154  225391 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:07:26.077227  225391 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:07:26.077267  225391 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:07:26.077324  225391 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:07:26.077388  225391 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:07:26.077446  225391 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:07:26.077499  225391 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:07:26.077559  225391 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:07:26.077610  225391 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:07:26.077691  225391 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:07:26.077753  225391 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:07:26.080581  225391 out.go:252]   - Booting up control plane ...
	I0110 09:07:26.080705  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:07:26.080793  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:07:26.080867  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:07:26.080970  225391 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:07:26.081061  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:07:26.081199  225391 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:07:26.081366  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:07:26.081412  225391 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:07:26.081548  225391 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:07:26.081667  225391 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:26.081751  225391 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001204335s
	I0110 09:07:26.081760  225391 kubeadm.go:319] 
	I0110 09:07:26.081826  225391 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:26.081859  225391 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:26.081966  225391 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:26.081971  225391 kubeadm.go:319] 
	I0110 09:07:26.082073  225391 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:26.082123  225391 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:26.082153  225391 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:26.082184  225391 kubeadm.go:319] 
	I0110 09:07:26.082213  225391 kubeadm.go:403] duration metric: took 8m6.697325361s to StartCluster
	I0110 09:07:26.082247  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:26.082328  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:26.116604  225391 cri.go:96] found id: ""
	I0110 09:07:26.116638  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.116647  225391 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:26.116654  225391 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 09:07:26.116715  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:26.141414  225391 cri.go:96] found id: ""
	I0110 09:07:26.141437  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.141445  225391 logs.go:284] No container was found matching "etcd"
	I0110 09:07:26.141452  225391 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 09:07:26.141509  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:26.210270  225391 cri.go:96] found id: ""
	I0110 09:07:26.210292  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.210300  225391 logs.go:284] No container was found matching "coredns"
	I0110 09:07:26.210307  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:26.210364  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:26.245411  225391 cri.go:96] found id: ""
	I0110 09:07:26.245433  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.245441  225391 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:26.245447  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:26.245504  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:26.273289  225391 cri.go:96] found id: ""
	I0110 09:07:26.273311  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.273319  225391 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:26.273326  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:26.273411  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:26.298888  225391 cri.go:96] found id: ""
	I0110 09:07:26.298917  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.298926  225391 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:26.298934  225391 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:26.298990  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:26.323177  225391 cri.go:96] found id: ""
	I0110 09:07:26.323202  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.323212  225391 logs.go:284] No container was found matching "kindnet"
	I0110 09:07:26.323234  225391 logs.go:123] Gathering logs for Docker ...
	I0110 09:07:26.323244  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 09:07:26.345736  225391 logs.go:123] Gathering logs for container status ...
	I0110 09:07:26.345767  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:26.376613  225391 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:26.376639  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:26.434314  225391 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:26.434350  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:26.449986  225391 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:26.450013  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:26.516534  225391 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:26.508407    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.509205    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.510762    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.511108    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.512576    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:26.508407    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.509205    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.510762    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.511108    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.512576    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
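	Consistent with the refused connection on 8443, the container listing minikube falls back to in the gather steps above can also be replayed directly (same crictl invocation, wrapped in docker exec; container name assumed from this run):

	    docker exec force-systemd-env-861581 sudo crictl ps -a

	With the kubelet never coming up, this returns no control-plane containers, matching the repeated 'found id: ""' results above.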
	W0110 09:07:26.516559  225391 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
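	Given that both failing tests exist to force the systemd cgroup driver, the first mismatch worth ruling out is Docker and the kubelet disagreeing on the driver. A quick comparison, with the cgroupDriver field name assumed from the KubeletConfiguration v1beta1 schema:

	    docker exec force-systemd-env-861581 docker info --format '{{.CgroupDriver}}'
	    docker exec force-systemd-env-861581 grep cgroupDriver /var/lib/kubelet/config.yaml

	The two values must agree (both 'systemd' here, if the forced systemd configuration took effect); a mismatch makes the kubelet exit shortly after start, which is exactly the 'kubelet is not running' symptom above.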
	W0110 09:07:26.516592  225391 out.go:285] * 
	W0110 09:07:26.516640  225391 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:26.516658  225391 out.go:285] * 
	W0110 09:07:26.516906  225391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
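	For completeness, the log collection the box asks for maps onto this run as (profile flag taken from the invocation under test):

	    out/minikube-linux-arm64 -p force-systemd-env-861581 logs --file=logs.txt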
	I0110 09:07:26.523726  225391 out.go:203] 
	W0110 09:07:26.526649  225391 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
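	If the repeated cgroups v1 deprecation warning were the actual blocker, the opt-in it describes would be a one-line kubelet configuration change; a sketch only, with the failCgroupV1 YAML key assumed to be the serialized form of the FailCgroupV1 option the warning names:

	    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml

	On this cgroup v1 host (kernel 5.15) that makes the warning a plausible root cause for the kubelet never answering its health check, alongside the cgroup-driver mismatch suggested below.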
	
	W0110 09:07:26.526712  225391 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:26.526732  225391 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
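	Applying the suggestion to this run's invocation gives, roughly (all flags except the added --extra-config taken from the failed command reported below):

	    out/minikube-linux-arm64 start -p force-systemd-env-861581 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd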
	I0110 09:07:26.529870  225391 out.go:203] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-861581 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-861581 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-10 09:07:27.122588006 +0000 UTC m=+2829.591675964
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-861581
helpers_test.go:244: (dbg) docker inspect force-systemd-env-861581:

-- stdout --
	[
	    {
	        "Id": "ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509",
	        "Created": "2026-01-10T08:59:07.633028607Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226049,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:59:07.698800331Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509/hosts",
	        "LogPath": "/var/lib/docker/containers/ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509/ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509-json.log",
	        "Name": "/force-systemd-env-861581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-861581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-861581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea559ab3c3902d9a91c9bd016e37570420ef14f331e4bd4b9fe17f8e37ead509",
	                "LowerDir": "/var/lib/docker/overlay2/08ed80d9ae0aa4dee92fe2594907adae4e5d00e3f027f196c6cbc5b785338799-init/diff:/var/lib/docker/overlay2/248ee347a986ccd1655df91e733f088b104cf9846d12889b06882322d682136d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08ed80d9ae0aa4dee92fe2594907adae4e5d00e3f027f196c6cbc5b785338799/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08ed80d9ae0aa4dee92fe2594907adae4e5d00e3f027f196c6cbc5b785338799/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08ed80d9ae0aa4dee92fe2594907adae4e5d00e3f027f196c6cbc5b785338799/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-861581",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-861581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-861581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-861581",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-861581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5550e2c1818fdc04af43c1994288720d7882476a7817504600d46aeba7400dbf",
	            "SandboxKey": "/var/run/docker/netns/5550e2c1818f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32997"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32998"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33001"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32999"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33000"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-861581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:8c:72:25:6b:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "322c731839f0a0ab4686647bc7705b24a3a5cb0427a390b43829dc4de9fe490c",
	                    "EndpointID": "6002e1add57c5f7f9ffd10b8a74804aa94afbf0800c120f3a22682dbe5c2cc13",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-861581",
	                        "ea559ab3c390"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
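The details that matter from this inspect can be pulled out directly with standard Go-template queries (container name from this run; the 8443 mapping is the apiserver port nothing ever answered on):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-env-861581
    docker inspect -f '{{(index .NetworkSettings.Networks "force-systemd-env-861581").IPAddress}}' force-systemd-env-861581

Against the JSON above these print 33000 and 192.168.76.2.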
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-861581 -n force-systemd-env-861581
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-861581 -n force-systemd-env-861581: exit status 6 (342.769338ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 09:07:27.478878  238395 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-861581" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig

** /stderr **
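The stderr line above is minikube's kubeconfig sanity check failing: the profile must appear as a cluster entry in the kubeconfig before an endpoint can be resolved. A simplified sketch of that lookup using client-go (the path and profile name are taken from the log; the real check in status.go is more involved):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/22427-2299/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	name := "force-systemd-env-861581"
	cluster, ok := cfg.Clusters[name]
	if !ok {
		// The condition behind the "does not appear in ... kubeconfig" error above.
		log.Fatalf("%q does not appear in %s", name, kubeconfig)
	}
	fmt.Println("endpoint:", cluster.Server)
}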
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-861581 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-632912 sudo systemctl cat kubelet --no-pager                                                                        │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo journalctl -xeu kubelet --all --full --no-pager                                                         │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/kubernetes/kubelet.conf                                                                        │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /var/lib/kubelet/config.yaml                                                                        │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat docker --no-pager                                                                         │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/docker/daemon.json                                                                             │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo docker system info                                                                                      │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cri-dockerd --version                                                                                   │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat containerd --no-pager                                                                     │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo cat /etc/containerd/config.toml                                                                         │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo containerd config dump                                                                                  │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo systemctl cat crio --no-pager                                                                           │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ -p cilium-632912 sudo crio config                                                                                             │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ delete  │ -p cilium-632912                                                                                                              │ cilium-632912             │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ 10 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-573381 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │                     │
	│ ssh     │ force-systemd-env-861581 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-861581  │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
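The final audit row is the assertion the ForceSystemd tests are built around: docker info inside the node must report the systemd cgroup driver. A sketch of that check (the binary path, profile name, and command are copied from the audit table; wiring them through os/exec is an illustration, not the test's actual helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the audit entry: ssh into the profile and ask dockerd
	// which cgroup driver it is running with.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-env-861581",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		log.Fatalf("expected cgroup driver \"systemd\", got %q", driver)
	}
	fmt.Println("cgroup driver is systemd")
}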
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:59:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:59:08.534283  226492 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:59:08.534423  226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:08.534431  226492 out.go:374] Setting ErrFile to fd 2...
	I0110 08:59:08.534436  226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:08.534716  226492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:59:08.535124  226492 out.go:368] Setting JSON to false
	I0110 08:59:08.535945  226492 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2501,"bootTime":1768033048,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:59:08.536012  226492 start.go:143] virtualization:  
	I0110 08:59:08.540087  226492 out.go:179] * [force-systemd-flag-573381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:59:08.544100  226492 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:59:08.544407  226492 notify.go:221] Checking for updates...
	I0110 08:59:08.550278  226492 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:59:08.553314  226492 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:59:08.556418  226492 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:59:08.559460  226492 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:59:08.562977  226492 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:59:08.566347  226492 config.go:182] Loaded profile config "force-systemd-env-861581": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:08.566466  226492 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:59:08.606961  226492 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:59:08.607065  226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:08.717444  226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.708565746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:08.717541  226492 docker.go:319] overlay module found
	I0110 08:59:08.721224  226492 out.go:179] * Using the docker driver based on user configuration
	I0110 08:59:08.724306  226492 start.go:309] selected driver: docker
	I0110 08:59:08.724327  226492 start.go:928] validating driver "docker" against <nil>
	I0110 08:59:08.724341  226492 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:59:08.724965  226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:08.818940  226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.808836061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:08.819091  226492 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:59:08.819299  226492 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:59:08.821663  226492 out.go:179] * Using Docker driver with root privileges
	I0110 08:59:08.824752  226492 cni.go:84] Creating CNI manager for ""
	I0110 08:59:08.824824  226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:08.824834  226492 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 08:59:08.824913  226492 start.go:353] cluster config:
	{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:08.829730  226492 out.go:179] * Starting "force-systemd-flag-573381" primary control-plane node in "force-systemd-flag-573381" cluster
	I0110 08:59:08.832988  226492 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 08:59:08.835974  226492 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:59:08.838696  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:08.838738  226492 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 08:59:08.838749  226492 cache.go:65] Caching tarball of preloaded images
	I0110 08:59:08.838829  226492 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 08:59:08.838837  226492 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 08:59:08.838952  226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
	I0110 08:59:08.838969  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json: {Name:mk792ad7b15ee4a35e6dcc78722d34e91cdf2a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:08.839095  226492 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:59:08.864802  226492 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:59:08.864821  226492 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:59:08.864835  226492 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:59:08.864865  226492 start.go:360] acquireMachinesLock for force-systemd-flag-573381: {Name:mk03eb5fbb2bba12d438b336944081d9ef274656 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:59:08.864956  226492 start.go:364] duration metric: took 76.341µs to acquireMachinesLock for "force-systemd-flag-573381"
	I0110 08:59:08.864979  226492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 08:59:08.865046  226492 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:59:08.009549  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.056957  225391 cli_runner.go:164] Run: docker exec force-systemd-env-861581 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:59:08.189520  225391 oci.go:144] the created container "force-systemd-env-861581" has a running status.
	I0110 08:59:08.189596  225391 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa...
	I0110 08:59:08.268629  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 08:59:08.268715  225391 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:59:08.320052  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.382973  225391 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:59:08.382993  225391 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-861581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:59:08.467273  225391 cli_runner.go:164] Run: docker container inspect force-systemd-env-861581 --format={{.State.Status}}
	I0110 08:59:08.521591  225391 machine.go:94] provisionDockerMachine start ...
	I0110 08:59:08.521684  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:08.560918  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:08.561270  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:08.561287  225391 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:59:08.562000  225391 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 08:59:11.716936  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-861581
	
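The handshake-failed/succeeded pair above is the usual dial-retry pattern: the container has only just started, so sshd is not yet accepting connections on the published port. A TCP-level stand-in for that retry loop (the address is the published 22/tcp port from the inspect output above; the timeouts are arbitrary):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:32997" // published 22/tcp port of force-systemd-env-861581
	deadline := time.Now().Add(30 * time.Second)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("sshd is accepting connections")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("gave up waiting for %s: %v", addr, err)
		}
		time.Sleep(time.Second) // freshly created container; sshd may still be starting
	}
}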
	I0110 08:59:11.716964  225391 ubuntu.go:182] provisioning hostname "force-systemd-env-861581"
	I0110 08:59:11.717027  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:11.736571  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:11.736890  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:11.736907  225391 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-861581 && echo "force-systemd-env-861581" | sudo tee /etc/hostname
	I0110 08:59:11.897006  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-861581
	
	I0110 08:59:11.897084  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:11.915165  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:11.916016  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:11.916040  225391 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-861581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-861581/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-861581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:59:12.069516  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:59:12.069586  225391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2299/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2299/.minikube}
	I0110 08:59:12.069626  225391 ubuntu.go:190] setting up certificates
	I0110 08:59:12.069636  225391 provision.go:84] configureAuth start
	I0110 08:59:12.069697  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:12.087654  225391 provision.go:143] copyHostCerts
	I0110 08:59:12.087701  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:12.087733  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem, removing ...
	I0110 08:59:12.087743  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:12.087822  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem (1082 bytes)
	I0110 08:59:12.087906  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:12.087927  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem, removing ...
	I0110 08:59:12.087936  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:12.087966  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem (1123 bytes)
	I0110 08:59:12.088015  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:12.088038  225391 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem, removing ...
	I0110 08:59:12.088046  225391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:12.088071  225391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem (1679 bytes)
	I0110 08:59:12.088120  225391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-861581 san=[127.0.0.1 192.168.76.2 force-systemd-env-861581 localhost minikube]
	I0110 08:59:12.857670  225391 provision.go:177] copyRemoteCerts
	I0110 08:59:12.857747  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:59:12.857794  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:12.886978  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:08.868543  226492 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:59:08.868782  226492 start.go:159] libmachine.API.Create for "force-systemd-flag-573381" (driver="docker")
	I0110 08:59:08.868812  226492 client.go:173] LocalClient.Create starting
	I0110 08:59:08.868883  226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem
	I0110 08:59:08.868918  226492 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:08.868933  226492 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:08.868978  226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem
	I0110 08:59:08.869002  226492 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:08.869013  226492 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:08.869403  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:59:08.885872  226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:59:08.885961  226492 network_create.go:284] running [docker network inspect force-systemd-flag-573381] to gather additional debugging logs...
	I0110 08:59:08.885976  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381
	W0110 08:59:08.905316  226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 returned with exit code 1
	I0110 08:59:08.905422  226492 network_create.go:287] error running [docker network inspect force-systemd-flag-573381]: docker network inspect force-systemd-flag-573381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-573381 not found
	I0110 08:59:08.905445  226492 network_create.go:289] output of [docker network inspect force-systemd-flag-573381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-573381 not found
	
	** /stderr **
	I0110 08:59:08.905535  226492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:08.924865  226492 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1cad6f167682 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:2e:00:65:f8:e1} reservation:<nil>}
	I0110 08:59:08.925148  226492 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-470266542ec0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:41:d2:db:7c:3c} reservation:<nil>}
	I0110 08:59:08.925444  226492 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed6e044af825 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:1d:61:47:90:b1} reservation:<nil>}
	I0110 08:59:08.925750  226492 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-322c731839f0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:f9:1c:29:7d:48} reservation:<nil>}
	I0110 08:59:08.926117  226492 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a88410}
	I0110 08:59:08.926138  226492 network_create.go:124] attempt to create docker network force-systemd-flag-573381 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 08:59:08.926194  226492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-573381 force-systemd-flag-573381
	I0110 08:59:09.004073  226492 network_create.go:108] docker network force-systemd-flag-573381 192.168.85.0/24 created
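The four "skipping subnet" lines and the final choice of 192.168.85.0/24 show the subnet walk: candidate /24 networks are tried in order and the first one not claimed by an existing docker bridge wins. An illustrative reconstruction (the step of 9 between candidates matches the values in the log; whether network.go enumerates exactly this way is an assumption):

package main

import "fmt"

func main() {
	// Third octets already claimed by docker bridges, per the log above.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	for third := 49; third < 255; third += 9 {
		if taken[third] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
		break // 192.168.85.0/24, matching the network_create line above
	}
}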
	I0110 08:59:09.004107  226492 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-573381" container
	I0110 08:59:09.004205  226492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:59:09.022515  226492 cli_runner.go:164] Run: docker volume create force-systemd-flag-573381 --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:59:09.042894  226492 oci.go:103] Successfully created a docker volume force-systemd-flag-573381
	I0110 08:59:09.042990  226492 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-573381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --entrypoint /usr/bin/test -v force-systemd-flag-573381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:59:09.628587  226492 oci.go:107] Successfully prepared a docker volume force-systemd-flag-573381
	I0110 08:59:09.628655  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:09.628667  226492 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:59:09.628730  226492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:59:12.873367  226492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.244512326s)
	I0110 08:59:12.873399  226492 kic.go:203] duration metric: took 3.244728311s to extract preloaded images to volume ...
	W0110 08:59:12.873534  226492 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 08:59:12.873643  226492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:59:12.964719  226492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-573381 --name force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-573381 --network force-systemd-flag-573381 --ip 192.168.85.2 --volume force-systemd-flag-573381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 08:59:13.335555  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Running}}
	I0110 08:59:13.363137  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:13.385096  226492 cli_runner.go:164] Run: docker exec force-systemd-flag-573381 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:59:13.441925  226492 oci.go:144] the created container "force-systemd-flag-573381" has a running status.
	I0110 08:59:13.441953  226492 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa...
	I0110 08:59:12.999971  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 08:59:13.000032  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 08:59:13.024986  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 08:59:13.025049  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 08:59:13.058689  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 08:59:13.058768  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:59:13.086699  225391 provision.go:87] duration metric: took 1.017037968s to configureAuth
	I0110 08:59:13.086727  225391 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:59:13.086896  225391 config.go:182] Loaded profile config "force-systemd-env-861581": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:13.086952  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.105444  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.105761  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.105778  225391 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 08:59:13.277968  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 08:59:13.277992  225391 ubuntu.go:71] root file system type: overlay
	I0110 08:59:13.278147  225391 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 08:59:13.278238  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.308466  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.308769  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.308845  225391 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 08:59:13.534879  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 08:59:13.534968  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:13.584626  225391 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:13.584980  225391 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0110 08:59:13.585004  225391 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 08:59:14.887350  225391 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 08:59:13.523637922 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
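The drop-in comments in the unit above explain the rule that makes the bare ExecStart= line necessary: for non-oneshot services, systemd rejects more than one effective ExecStart=, and an empty assignment resets the list. A small validator for that rule (simplified: it ignores backslash line continuations such as the ones in the heredoc above):

package main

import (
	"fmt"
	"strings"
)

// countExecStarts counts ExecStart= settings the way systemd would:
// a bare "ExecStart=" clears everything configured so far.
func countExecStarts(unit string) int {
	n := 0
	for _, line := range strings.Split(unit, "\n") {
		line = strings.TrimSpace(line)
		if line == "ExecStart=" {
			n = 0 // reset directive
		} else if strings.HasPrefix(line, "ExecStart=") {
			n++
		}
	}
	return n
}

func main() {
	unit := "ExecStart=/usr/bin/dockerd -H fd://\n" + // inherited from the base unit
		"ExecStart=\n" + // the reset the comments above insist on
		"ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n"
	fmt.Println("effective ExecStart count:", countExecStarts(unit)) // 1, so the unit loads
}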
	I0110 08:59:14.887382  225391 machine.go:97] duration metric: took 6.36577334s to provisionDockerMachine
	I0110 08:59:14.887395  225391 client.go:176] duration metric: took 11.595970327s to LocalClient.Create
	I0110 08:59:14.887407  225391 start.go:167] duration metric: took 11.596031267s to libmachine.API.Create "force-systemd-env-861581"
	I0110 08:59:14.887414  225391 start.go:293] postStartSetup for "force-systemd-env-861581" (driver="docker")
	I0110 08:59:14.887425  225391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:59:14.887491  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:59:14.887543  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:14.903934  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.012696  225391 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:59:15.028927  225391 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:59:15.028962  225391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:59:15.028977  225391 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/addons for local assets ...
	I0110 08:59:15.029060  225391 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/files for local assets ...
	I0110 08:59:15.029164  225391 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> 40942.pem in /etc/ssl/certs
	I0110 08:59:15.029179  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /etc/ssl/certs/40942.pem
	I0110 08:59:15.029297  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:59:15.038200  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:15.057979  225391 start.go:296] duration metric: took 170.550295ms for postStartSetup
	I0110 08:59:15.058380  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:15.076128  225391 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/config.json ...
	I0110 08:59:15.076425  225391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:59:15.076474  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.094586  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.194782  225391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:59:15.200128  225391 start.go:128] duration metric: took 11.912250731s to createHost
	I0110 08:59:15.200153  225391 start.go:83] releasing machines lock for "force-systemd-env-861581", held for 11.912375106s
	I0110 08:59:15.200222  225391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-861581
	I0110 08:59:15.217294  225391 ssh_runner.go:195] Run: cat /version.json
	I0110 08:59:15.217478  225391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:59:15.217549  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.217550  225391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-861581
	I0110 08:59:15.235126  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.236278  225391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-env-861581/id_rsa Username:docker}
	I0110 08:59:15.435440  225391 ssh_runner.go:195] Run: systemctl --version
	I0110 08:59:15.443064  225391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:59:15.447479  225391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:59:15.447558  225391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:59:15.475208  225391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
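Note: the find command above renames any bridge/podman CNI configs by appending ".mk_disabled", so the container runtime ignores them and they cannot conflict with the CNI minikube selects. A minimal sketch of the inverse (re-enabling them), assuming the same suffix convention:

    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
        -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;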
	I0110 08:59:15.475283  225391 start.go:496] detecting cgroup driver to use...
	I0110 08:59:15.475315  225391 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:15.475457  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:15.489122  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 08:59:15.497941  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 08:59:15.506581  225391 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 08:59:15.506700  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 08:59:15.515313  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:15.524196  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 08:59:15.532638  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:15.540905  225391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:59:15.548584  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 08:59:15.557133  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 08:59:15.565729  225391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 08:59:15.574603  225391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:59:15.581887  225391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:59:15.588746  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:15.695968  225391 ssh_runner.go:195] Run: sudo systemctl restart containerd
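Note: the sed edits above switch containerd's runc shim to the systemd cgroup driver before containerd is restarted. A sketch of what they should leave behind in /etc/containerd/config.toml (expected shape, not the file as written on this host):

    sudo grep -B1 'SystemdCgroup' /etc/containerd/config.toml
    # [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #   SystemdCgroup = true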
	I0110 08:59:15.790378  225391 start.go:496] detecting cgroup driver to use...
	I0110 08:59:15.790407  225391 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:15.790463  225391 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 08:59:15.806611  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:15.819580  225391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:59:15.841928  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:15.854744  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 08:59:15.867804  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:15.881334  225391 ssh_runner.go:195] Run: which cri-dockerd
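Note: /etc/crictl.yaml tells crictl which CRI endpoint to use; it was first pointed at containerd and is rewritten above for cri-dockerd because this run uses --container-runtime=docker. Once the socket exists, connectivity can be checked with, for example:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version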
	I0110 08:59:15.885018  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 08:59:15.892436  225391 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 08:59:15.905051  225391 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 08:59:16.011243  225391 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 08:59:16.129796  225391 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 08:59:16.129949  225391 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 08:59:16.143916  225391 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 08:59:16.156220  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:16.272514  225391 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 08:59:16.684258  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
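Note: the 129-byte /etc/docker/daemon.json written above is what moves dockerd itself onto the systemd cgroup driver; its exact contents are not logged. A plausible shape (an assumption, not the verbatim file) is:

    sudo tee /etc/docker/daemon.json <<'EOF'
    {"exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "storage-driver": "overlay2"}
    EOF

The effect is verified later in this log via `docker info --format {{.CgroupDriver}}`.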
	I0110 08:59:16.697425  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 08:59:16.710586  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:16.723645  225391 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 08:59:16.853869  225391 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 08:59:16.969808  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.111276  225391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 08:59:17.127155  225391 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 08:59:17.143618  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.297673  225391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 08:59:17.372533  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:17.387012  225391 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 08:59:17.387077  225391 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 08:59:17.392705  225391 start.go:574] Will wait 60s for crictl version
	I0110 08:59:17.392771  225391 ssh_runner.go:195] Run: which crictl
	I0110 08:59:17.405795  225391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:59:17.435907  225391 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 08:59:17.435978  225391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:17.460682  225391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:17.498709  225391 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 08:59:17.498797  225391 cli_runner.go:164] Run: docker network inspect force-systemd-env-861581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:17.518177  225391 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 08:59:17.521968  225391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:17.531613  225391 kubeadm.go:884] updating cluster {Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:59:17.531724  225391 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:17.531778  225391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:17.550956  225391 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:17.550977  225391 docker.go:624] Images already preloaded, skipping extraction
	I0110 08:59:17.551037  225391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:17.568431  225391 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:17.568451  225391 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:59:17.568460  225391 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0110 08:59:17.568558  225391 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-861581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
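Note: the kubelet unit above uses systemd's override idiom: the empty `ExecStart=` clears any start command inherited from the base unit before the minikube-specific one is set, so exactly one ExecStart survives. The merged result of the base unit plus drop-ins can be inspected with:

    sudo systemctl cat kubelet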
	I0110 08:59:17.568632  225391 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 08:59:17.650813  225391 cni.go:84] Creating CNI manager for ""
	I0110 08:59:17.650856  225391 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:17.650886  225391 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:59:17.650906  225391 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-861581 NodeName:force-systemd-env-861581 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:59:17.651053  225391 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-861581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
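Note: the documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are concatenated into a single /var/tmp/minikube/kubeadm.yaml. On kubeadm releases this recent, the file could be sanity-checked before init with something like (a hedged suggestion, not part of this run):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml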
	I0110 08:59:17.651133  225391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:59:17.660994  225391 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:59:17.661066  225391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:59:17.669501  225391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0110 08:59:17.686901  225391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:59:17.702726  225391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 08:59:17.717818  225391 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:59:17.722071  225391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:17.733206  225391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:17.879466  225391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:59:17.896450  225391 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581 for IP: 192.168.76.2
	I0110 08:59:17.896469  225391 certs.go:195] generating shared ca certs ...
	I0110 08:59:17.896484  225391 certs.go:227] acquiring lock for ca certs: {Name:mk8055241a73ed80e6751b465b7d27c66c028c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:17.896615  225391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key
	I0110 08:59:17.896663  225391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key
	I0110 08:59:17.896671  225391 certs.go:257] generating profile certs ...
	I0110 08:59:17.896724  225391 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key
	I0110 08:59:17.896734  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt with IP's: []
	I0110 08:59:13.817711  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 08:59:13.817809  226492 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:59:13.849514  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:13.876467  226492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:59:13.876490  226492 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-573381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:59:13.967478  226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
	I0110 08:59:14.002485  226492 machine.go:94] provisionDockerMachine start ...
	I0110 08:59:14.002580  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:14.031458  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:14.031817  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:14.031827  226492 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:59:14.032463  226492 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55256->127.0.0.1:33002: read: connection reset by peer
	I0110 08:59:17.189005  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
	
	I0110 08:59:17.189034  226492 ubuntu.go:182] provisioning hostname "force-systemd-flag-573381"
	I0110 08:59:17.189096  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:17.213646  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:17.213955  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:17.213988  226492 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-573381 && echo "force-systemd-flag-573381" | sudo tee /etc/hostname
	I0110 08:59:17.393000  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
	
	I0110 08:59:17.393073  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:17.417619  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:17.417930  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:17.417946  226492 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-573381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-573381/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-573381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:59:17.577322  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:59:17.577379  226492 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2299/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2299/.minikube}
	I0110 08:59:17.577405  226492 ubuntu.go:190] setting up certificates
	I0110 08:59:17.577415  226492 provision.go:84] configureAuth start
	I0110 08:59:17.577472  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:17.603411  226492 provision.go:143] copyHostCerts
	I0110 08:59:17.603458  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:17.603498  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem, removing ...
	I0110 08:59:17.603505  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
	I0110 08:59:17.603594  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem (1082 bytes)
	I0110 08:59:17.603679  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:17.603697  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem, removing ...
	I0110 08:59:17.603701  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
	I0110 08:59:17.603727  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem (1123 bytes)
	I0110 08:59:17.603777  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:17.603792  226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem, removing ...
	I0110 08:59:17.603796  226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
	I0110 08:59:17.603818  226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem (1679 bytes)
	I0110 08:59:17.603870  226492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-573381 san=[127.0.0.1 192.168.85.2 force-systemd-flag-573381 localhost minikube]
	I0110 08:59:18.101227  226492 provision.go:177] copyRemoteCerts
	I0110 08:59:18.101309  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:59:18.101374  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.120236  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:18.228191  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 08:59:18.228270  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 08:59:18.252222  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 08:59:18.252289  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 08:59:18.276205  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 08:59:18.276272  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:59:18.301241  226492 provision.go:87] duration metric: took 723.793723ms to configureAuth
	I0110 08:59:18.301273  226492 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:59:18.301552  226492 config.go:182] Loaded profile config "force-systemd-flag-573381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:59:18.301635  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.333060  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.333475  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.333499  226492 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 08:59:18.486799  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 08:59:18.486870  226492 ubuntu.go:71] root file system type: overlay
	I0110 08:59:18.487027  226492 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 08:59:18.487127  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.522677  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.522986  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.523069  226492 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 08:59:18.156940  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt ...
	I0110 08:59:18.157011  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.crt: {Name:mk1d644f117041e605f61f9a0109f3255cf0a378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.157243  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key ...
	I0110 08:59:18.157282  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/client.key: {Name:mk9c983eb3342b148732ea944d4f341316b8496c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.157431  225391 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024
	I0110 08:59:18.157471  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 08:59:18.623221  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 ...
	I0110 08:59:18.623299  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024: {Name:mk5f35e2e231e80473024277bf9b28bda21db8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.623532  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024 ...
	I0110 08:59:18.623572  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024: {Name:mke8502b301fe1eaefd3ecf5175774a0c6977987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.623707  225391 certs.go:382] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt.92eed024 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt
	I0110 08:59:18.623826  225391 certs.go:386] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key.92eed024 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key
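Note: the apiserver certificate above is signed for 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service), 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2. The SANs of the generated cert can be inspected with:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt \
        | grep -A1 'Subject Alternative Name'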
	I0110 08:59:18.623912  225391 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key
	I0110 08:59:18.623961  225391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt with IP's: []
	I0110 08:59:18.843520  225391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt ...
	I0110 08:59:18.843585  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt: {Name:mk473d90909ba38637b5738291c41c2829876ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.843780  225391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key ...
	I0110 08:59:18.843826  225391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key: {Name:mka7d0a6f5dae045b56116fa1e52e8b6f33982b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:18.843956  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 08:59:18.844014  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 08:59:18.844045  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 08:59:18.844077  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 08:59:18.844114  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 08:59:18.844147  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 08:59:18.844189  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 08:59:18.844251  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 08:59:18.844330  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem (1338 bytes)
	W0110 08:59:18.844411  225391 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094_empty.pem, impossibly tiny 0 bytes
	I0110 08:59:18.844445  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:59:18.844497  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem (1082 bytes)
	I0110 08:59:18.844549  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:59:18.844593  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem (1679 bytes)
	I0110 08:59:18.844666  225391 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:18.844716  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /usr/share/ca-certificates/40942.pem
	I0110 08:59:18.844753  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:18.844785  225391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem -> /usr/share/ca-certificates/4094.pem
	I0110 08:59:18.845321  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:59:18.863682  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:59:18.882262  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:59:18.903027  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:59:18.924914  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 08:59:18.944997  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:59:18.971030  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:59:19.005071  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-env-861581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:59:19.026470  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /usr/share/ca-certificates/40942.pem (1708 bytes)
	I0110 08:59:19.046299  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:59:19.065759  225391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem --> /usr/share/ca-certificates/4094.pem (1338 bytes)
	I0110 08:59:19.084589  225391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:59:19.098722  225391 ssh_runner.go:195] Run: openssl version
	I0110 08:59:19.104656  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.123473  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40942.pem /etc/ssl/certs/40942.pem
	I0110 08:59:19.134729  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.144190  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:26 /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.144276  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40942.pem
	I0110 08:59:19.186872  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:19.195411  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40942.pem /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:19.203476  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.211556  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:59:19.219494  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.223737  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.223822  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:19.274354  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:59:19.286142  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:59:19.299107  225391 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.308444  225391 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4094.pem /etc/ssl/certs/4094.pem
	I0110 08:59:19.316014  225391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.320081  225391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:26 /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.320154  225391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4094.pem
	I0110 08:59:19.363597  225391 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:59:19.372441  225391 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4094.pem /etc/ssl/certs/51391683.0
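Note: the openssl/ln pairs above implement OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are found through `<subject-hash>.0` symlinks, where the hash is what `openssl x509 -hash` prints. Using values from this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0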
	I0110 08:59:19.380290  225391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:59:19.384791  225391 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:59:19.384892  225391 kubeadm.go:401] StartCluster: {Name:force-systemd-env-861581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-861581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:19.385082  225391 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 08:59:19.430853  225391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:59:19.446599  225391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:59:19.466413  225391 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:59:19.466514  225391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:59:19.488057  225391 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:59:19.488126  225391 kubeadm.go:158] found existing configuration files:
	
	I0110 08:59:19.488209  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:59:19.501218  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:59:19.501329  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:59:19.513574  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:59:19.522958  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:59:19.523076  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:59:19.531066  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:59:19.542959  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:59:19.543075  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:59:19.552051  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:59:19.561757  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:59:19.561870  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:59:19.570721  225391 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:59:19.617724  225391 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:59:19.617941  225391 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:59:19.713552  225391 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:59:19.713672  225391 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 08:59:19.713763  225391 kubeadm.go:319] OS: Linux
	I0110 08:59:19.713844  225391 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:59:19.713925  225391 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 08:59:19.714003  225391 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:59:19.714084  225391 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:59:19.714164  225391 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:59:19.714245  225391 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:59:19.714322  225391 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:59:19.714403  225391 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:59:19.714492  225391 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 08:59:19.796527  225391 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:59:19.796693  225391 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:59:19.796816  225391 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:59:19.818202  225391 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:59:19.823882  225391 out.go:252]   - Generating certificates and keys ...
	I0110 08:59:19.823974  225391 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:59:19.824054  225391 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:59:19.986091  225391 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:59:20.196887  225391 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:59:20.465190  225391 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:59:20.561747  225391 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:59:20.641139  225391 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:59:20.641461  225391 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:20.901319  225391 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:59:20.901553  225391 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:21.168646  225391 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:59:21.987096  225391 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:59:22.529820  225391 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:59:22.532257  225391 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:59:22.973473  225391 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:59:18.721846  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 08:59:18.721925  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:18.755481  226492 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:18.755783  226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33002 <nil> <nil>}
	I0110 08:59:18.755815  226492 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 08:59:19.935990  226492 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 08:59:18.711684691 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
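	
	Note: the restart above is gated on diff's exit status: diff -u returns non-zero when the rendered unit differs from the one on disk, so the || branch installs the new file and restarts docker only when something actually changed. The same idiom, written out as a sketch:
	
	    # install the rendered unit only when it differs from the current one
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	        sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	    fi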
	
	I0110 08:59:19.936016  226492 machine.go:97] duration metric: took 5.933508422s to provisionDockerMachine
	I0110 08:59:19.936028  226492 client.go:176] duration metric: took 11.067209235s to LocalClient.Create
	I0110 08:59:19.936041  226492 start.go:167] duration metric: took 11.067259614s to libmachine.API.Create "force-systemd-flag-573381"
	I0110 08:59:19.936049  226492 start.go:293] postStartSetup for "force-systemd-flag-573381" (driver="docker")
	I0110 08:59:19.936059  226492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:59:19.936120  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:59:19.936159  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:19.958962  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.074923  226492 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:59:20.079159  226492 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:59:20.079189  226492 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:59:20.079201  226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/addons for local assets ...
	I0110 08:59:20.079266  226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/files for local assets ...
	I0110 08:59:20.079356  226492 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> 40942.pem in /etc/ssl/certs
	I0110 08:59:20.079369  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /etc/ssl/certs/40942.pem
	I0110 08:59:20.079482  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:59:20.088316  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:20.110932  226492 start.go:296] duration metric: took 174.869214ms for postStartSetup
	I0110 08:59:20.111307  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:20.129061  226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
	I0110 08:59:20.129339  226492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:59:20.129450  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.146816  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.266379  226492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:59:20.271370  226492 start.go:128] duration metric: took 11.406310013s to createHost
	I0110 08:59:20.271395  226492 start.go:83] releasing machines lock for "force-systemd-flag-573381", held for 11.406430793s
	I0110 08:59:20.271464  226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
	I0110 08:59:20.288793  226492 ssh_runner.go:195] Run: cat /version.json
	I0110 08:59:20.288851  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.289074  226492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:59:20.289133  226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
	I0110 08:59:20.322735  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.334868  226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
	I0110 08:59:20.541956  226492 ssh_runner.go:195] Run: systemctl --version
	I0110 08:59:20.549992  226492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:59:20.556905  226492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:59:20.556995  226492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:59:20.586357  226492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 08:59:20.586433  226492 start.go:496] detecting cgroup driver to use...
	I0110 08:59:20.586462  226492 start.go:500] using "systemd" cgroup driver as enforced via flags
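	
	Note: "enforced via flags" means the --force-systemd flag under test short-circuits minikube's normal cgroup-driver autodetection. One common way to check by hand which cgroup hierarchy the node itself runs (an illustration, not necessarily what minikube's detection does):
	
	    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1
	    stat -fc %T /sys/fs/cgroup
	
	The kubeadm warning later in this log about cgroups v1 deprecation suggests this host is still on cgroup v1.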
	I0110 08:59:20.586586  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:20.601310  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 08:59:20.610472  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 08:59:20.619345  226492 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 08:59:20.619503  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 08:59:20.631919  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:20.640858  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 08:59:20.650267  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:20.659888  226492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:59:20.668204  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 08:59:20.677415  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 08:59:20.688627  226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
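	
	Note: the sed series above rewrites /etc/containerd/config.toml in place (systemd cgroup driver, runc v2 shim, sandbox image, CNI conf_dir, unprivileged ports) ahead of the daemon-reload/restart below. After the restart, the edits can be spot-checked directly:
	
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = true
	    grep -n 'sandbox_image\|conf_dir' /etc/containerd/config.toml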
	I0110 08:59:20.697816  226492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:59:20.705665  226492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:59:20.713436  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:20.851878  226492 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 08:59:20.975957  226492 start.go:496] detecting cgroup driver to use...
	I0110 08:59:20.976035  226492 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:20.976120  226492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 08:59:20.994585  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:21.015980  226492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:59:21.047963  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:59:21.061003  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 08:59:21.076487  226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:21.092674  226492 ssh_runner.go:195] Run: which cri-dockerd
	I0110 08:59:21.096718  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 08:59:21.104845  226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 08:59:21.119518  226492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 08:59:21.267305  226492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 08:59:21.412794  226492 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 08:59:21.412940  226492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 08:59:21.428668  226492 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 08:59:21.442271  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:21.585985  226492 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 08:59:22.079009  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:59:22.093689  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 08:59:22.109192  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:22.124141  226492 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 08:59:22.285826  226492 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 08:59:22.470044  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:22.631147  226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 08:59:22.649887  226492 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 08:59:22.664808  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:22.817595  226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 08:59:22.901926  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 08:59:22.921322  226492 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 08:59:22.921557  226492 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 08:59:22.926346  226492 start.go:574] Will wait 60s for crictl version
	I0110 08:59:22.926464  226492 ssh_runner.go:195] Run: which crictl
	I0110 08:59:22.930949  226492 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:59:22.967399  226492 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 08:59:22.967545  226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:23.013575  226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 08:59:23.047281  226492 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 08:59:23.047431  226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:23.066948  226492 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 08:59:23.071229  226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:23.080762  226492 kubeadm.go:884] updating cluster {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:59:23.080873  226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 08:59:23.080927  226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:23.099976  226492 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:23.099997  226492 docker.go:624] Images already preloaded, skipping extraction
	I0110 08:59:23.100066  226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 08:59:23.131172  226492 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 08:59:23.131194  226492 cache_images.go:86] Images are preloaded, skipping loading
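	
	Note: the two image listings above contain the same eight images; only the row order differs, and the order of docker images output is nothing to rely on. An order-insensitive comparison, as a sketch:
	
	    docker images --format '{{.Repository}}:{{.Tag}}' | sort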
	I0110 08:59:23.131204  226492 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0110 08:59:23.131305  226492 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-573381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
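	
	Note: the kubelet unit fragment above uses the same override pattern called out for docker.service earlier: an empty ExecStart= first clears any inherited start command, then the minikube-specific one is set. The merged unit plus drop-ins can be inspected on the node with:
	
	    sudo systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # written a few lines below (324 bytes)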
	I0110 08:59:23.131368  226492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 08:59:23.199852  226492 cni.go:84] Creating CNI manager for ""
	I0110 08:59:23.199937  226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:59:23.199990  226492 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:59:23.200028  226492 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-573381 NodeName:force-systemd-flag-573381 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:59:23.200180  226492 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-573381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
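	
	Note: this is the kubeadm config minikube writes to /var/tmp/minikube/kubeadm.yaml.new (2225 bytes, copied into place before init below); the cgroupDriver: systemd line in the KubeletConfiguration is the piece --force-systemd exercises. As a hedged aside, recent kubeadm releases can lint such a file without running it:
	
	    # 'kubeadm config validate' exists in recent kubeadm versions; treat as illustrative
	    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml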
	
	I0110 08:59:23.200298  226492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:59:23.208388  226492 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:59:23.208452  226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:59:23.216341  226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0110 08:59:23.229196  226492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:59:23.241814  226492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0110 08:59:23.255178  226492 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:59:23.258978  226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:23.269270  226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:23.403518  226492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:59:23.422001  226492 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381 for IP: 192.168.85.2
	I0110 08:59:23.422072  226492 certs.go:195] generating shared ca certs ...
	I0110 08:59:23.422112  226492 certs.go:227] acquiring lock for ca certs: {Name:mk8055241a73ed80e6751b465b7d27c66c028c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.422308  226492 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key
	I0110 08:59:23.422375  226492 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key
	I0110 08:59:23.422398  226492 certs.go:257] generating profile certs ...
	I0110 08:59:23.422483  226492 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key
	I0110 08:59:23.422517  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt with IP's: []
	I0110 08:59:23.631013  225391 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:59:23.965834  225391 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:59:24.017523  225391 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:59:24.210843  225391 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:59:24.210957  225391 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:59:24.213651  225391 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:59:23.559653  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt ...
	I0110 08:59:23.559734  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt: {Name:mkcd1531c8c1d18ccd6c5fe039b9f1900cfb2c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.559957  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key ...
	I0110 08:59:23.559993  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key: {Name:mk3d7418d4f308035237fc3f9abca77e176904a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.560151  226492 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88
	I0110 08:59:23.560193  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 08:59:23.877470  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 ...
	I0110 08:59:23.877541  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88: {Name:mk908398532d92633125c591bd292afec3cf2db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.877769  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 ...
	I0110 08:59:23.877802  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88: {Name:mk0bd2f2259a70d86d7ac055c0b2e17ebe7e9105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.877941  226492 certs.go:382] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt
	I0110 08:59:23.878079  226492 certs.go:386] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key
	I0110 08:59:23.878168  226492 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key
	I0110 08:59:23.878219  226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt with IP's: []
	I0110 08:59:24.034669  226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt ...
	I0110 08:59:24.034725  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt: {Name:mkf9293bc335f7385742865bf35c11d43e999969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:24.034928  226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key ...
	I0110 08:59:24.034967  226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key: {Name:mk223176e848184d582c970ee99983183f6c07ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:24.035099  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 08:59:24.035145  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 08:59:24.035175  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 08:59:24.035220  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 08:59:24.035255  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 08:59:24.035287  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 08:59:24.035332  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 08:59:24.035369  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 08:59:24.035459  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem (1338 bytes)
	W0110 08:59:24.035533  226492 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094_empty.pem, impossibly tiny 0 bytes
	I0110 08:59:24.035575  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:59:24.035631  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem (1082 bytes)
	I0110 08:59:24.035696  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:59:24.035747  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem (1679 bytes)
	I0110 08:59:24.035834  226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem (1708 bytes)
	I0110 08:59:24.035892  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.035944  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.035978  226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem -> /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.036583  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:59:24.054568  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:59:24.076860  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:59:24.095940  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:59:24.118670  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 08:59:24.138950  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:59:24.164637  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:59:24.184864  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:59:24.209003  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /usr/share/ca-certificates/40942.pem (1708 bytes)
	I0110 08:59:24.231074  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:59:24.255151  226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem --> /usr/share/ca-certificates/4094.pem (1338 bytes)
	I0110 08:59:24.278140  226492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:59:24.295203  226492 ssh_runner.go:195] Run: openssl version
	I0110 08:59:24.301854  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.309375  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40942.pem /etc/ssl/certs/40942.pem
	I0110 08:59:24.318264  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.324783  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:26 /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.324855  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40942.pem
	I0110 08:59:24.372891  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:24.381791  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40942.pem /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:24.390489  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.398828  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:59:24.407913  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.412372  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.412452  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:24.473682  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:59:24.483843  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:59:24.492073  226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.499221  226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4094.pem /etc/ssl/certs/4094.pem
	I0110 08:59:24.506976  226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.511249  226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:26 /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.511365  226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4094.pem
	I0110 08:59:24.552759  226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:59:24.560412  226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4094.pem /etc/ssl/certs/51391683.0
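	
	Note: the openssl x509 -hash -noout runs above print the OpenSSL subject hash for each certificate, and the ln -fs commands create the <hash>.0 symlinks (b5213941.0, 3ec20f2e.0, 51391683.0) that OpenSSL's CA lookup expects in /etc/ssl/certs. The idiom in isolation:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # here h = b5213941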
	I0110 08:59:24.569027  226492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:59:24.572610  226492 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:59:24.572709  226492 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:24.572848  226492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 08:59:24.589197  226492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:59:24.597153  226492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:59:24.604880  226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:59:24.604945  226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:59:24.612455  226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:59:24.612476  226492 kubeadm.go:158] found existing configuration files:
	
	I0110 08:59:24.612556  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:59:24.620165  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:59:24.620239  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:59:24.627301  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:59:24.634604  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:59:24.634677  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:59:24.642012  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.649868  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:59:24.649940  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.657446  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:59:24.665005  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:59:24.665080  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:59:24.672497  226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:59:24.714035  226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:59:24.714212  226492 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:59:24.792424  226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:59:24.792577  226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 08:59:24.792652  226492 kubeadm.go:319] OS: Linux
	I0110 08:59:24.792740  226492 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:59:24.792828  226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 08:59:24.792903  226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:59:24.792981  226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:59:24.793060  226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:59:24.793140  226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:59:24.793217  226492 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:59:24.793293  226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:59:24.793408  226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 08:59:24.860290  226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:59:24.860468  226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:59:24.860597  226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:59:24.877774  226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:59:24.219731  225391 out.go:252]   - Booting up control plane ...
	I0110 08:59:24.219846  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:59:24.219940  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:59:24.220011  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:59:24.255368  225391 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:59:24.255481  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:59:24.270529  225391 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:59:24.270635  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:59:24.270680  225391 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:59:24.457003  225391 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:59:24.457138  225391 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 08:59:24.883916  226492 out.go:252]   - Generating certificates and keys ...
	I0110 08:59:24.884078  226492 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:59:24.884189  226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:59:25.017207  226492 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:59:25.505301  226492 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:59:25.598478  226492 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:59:25.907160  226492 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:59:26.177844  226492 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:59:26.178499  226492 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 08:59:26.496023  226492 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:59:26.496358  226492 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 08:59:26.690002  226492 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:59:27.036356  226492 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:59:27.401186  226492 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:59:27.401511  226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:59:27.640969  226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:59:27.949614  226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:59:28.312484  226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:59:28.649712  226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:59:29.128888  226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:59:29.129663  226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:59:29.133359  226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:59:29.136985  226492 out.go:252]   - Booting up control plane ...
	I0110 08:59:29.137093  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:59:29.137176  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:59:29.138118  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:59:29.186624  226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:59:29.186957  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:59:29.196035  226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:59:29.196583  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:59:29.196880  226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:59:29.335046  226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:59:29.335207  226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:03:24.459506  225391 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002824481s
	I0110 09:03:24.459548  225391 kubeadm.go:319] 
	I0110 09:03:24.459612  225391 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:03:24.459651  225391 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:03:24.459760  225391 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:03:24.459766  225391 kubeadm.go:319] 
	I0110 09:03:24.459870  225391 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:03:24.459902  225391 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:03:24.459947  225391 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:03:24.459956  225391 kubeadm.go:319] 
	I0110 09:03:24.465112  225391 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:24.465556  225391 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:24.465665  225391 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:24.465899  225391 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:03:24.465905  225391 kubeadm.go:319] 
	I0110 09:03:24.465973  225391 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 09:03:24.466095  225391 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-861581 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002824481s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
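	The failure above reduces to a single probe: kubelet's healthz endpoint on 127.0.0.1:10248 never answered within the 4m0s window. For reference, the checks below are exactly the ones the output names, the curl probe from the error message plus the two troubleshooting commands from the hint, collected into one sketch to run inside the node:

	    # Probe the kubelet health endpoint kubeadm polls for up to 4m0s.
	    curl -sSL http://127.0.0.1:10248/healthz || echo 'kubelet healthz unreachable'
	    # Then inspect the kubelet unit and its recent journal, per the hint above.
	    systemctl status kubelet
	    journalctl -xeu kubelet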
	
	I0110 09:03:24.466174  225391 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 09:03:24.893626  225391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:03:24.906961  225391 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:03:24.907022  225391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:03:24.914984  225391 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:03:24.915051  225391 kubeadm.go:158] found existing configuration files:
	
	I0110 09:03:24.915107  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:03:24.922564  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:03:24.922631  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:03:24.929801  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:03:24.937325  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:03:24.937407  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:03:24.944505  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:03:24.952260  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:03:24.952330  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:03:24.959894  225391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:03:24.967585  225391 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:03:24.967694  225391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
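	The four grep-and-remove pairs above are minikube's stale kubeconfig cleanup: any of the four /etc/kubernetes/*.conf files that does not reference https://control-plane.minikube.internal:8443 is deleted before the retry (here each grep exits with status 2 because kubeadm reset already removed the files). The same pattern condensed into a loop for illustration; minikube itself issues each command individually, as logged:

	    # Drop kubeconfigs that do not reference the expected control-plane endpoint.
	    endpoint='https://control-plane.minikube.internal:8443'
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done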
	I0110 09:03:24.975150  225391 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:03:25.098938  225391 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:25.099421  225391 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:25.179732  225391 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:29.334568  226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279732s
	I0110 09:03:29.334620  226492 kubeadm.go:319] 
	I0110 09:03:29.334691  226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:03:29.334725  226492 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:03:29.334838  226492 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:03:29.334843  226492 kubeadm.go:319] 
	I0110 09:03:29.334951  226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:03:29.334987  226492 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:03:29.335018  226492 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:03:29.335022  226492 kubeadm.go:319] 
	I0110 09:03:29.338362  226492 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:29.338843  226492 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:29.339001  226492 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:29.339272  226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:03:29.339282  226492 kubeadm.go:319] 
	I0110 09:03:29.339450  226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 09:03:29.339564  226492 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
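	Both SystemVerification warnings in the stderr block point at the same node condition: the host is on cgroup v1, which kubelet v1.35 treats as opt-in. Per the warning text, opting in means setting the kubelet configuration option 'FailCgroupV1' to 'false'. A hypothetical sketch follows, assuming the camelCase config field is failCgroupV1 and appending to the file the log shows kubeadm writing; in practice minikube regenerates this file on each attempt, so this only illustrates where the option lives:

	    # Assumed field name; only the option name 'FailCgroupV1' comes from the warning.
	    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml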
	
	I0110 09:03:29.339670  226492 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 09:03:29.762408  226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:03:29.775647  226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:03:29.775764  226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:03:29.783284  226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:03:29.783304  226492 kubeadm.go:158] found existing configuration files:
	
	I0110 09:03:29.783360  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:03:29.790865  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:03:29.790931  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:03:29.798651  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:03:29.806487  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:03:29.806554  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:03:29.813908  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.821677  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:03:29.821788  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.829171  226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:03:29.836791  226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:03:29.836888  226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:03:29.844589  226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:03:29.883074  226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:03:29.883137  226492 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:03:29.995124  226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:03:29.995217  226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:03:29.995281  226492 kubeadm.go:319] OS: Linux
	I0110 09:03:29.995380  226492 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:03:29.995459  226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:03:29.995536  226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:03:29.995609  226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:03:29.995686  226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:03:29.995789  226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:03:29.995871  226492 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:03:29.995956  226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:03:29.996048  226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:03:30.094129  226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:03:30.094508  226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:03:30.094661  226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:03:30.113829  226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:03:30.119048  226492 out.go:252]   - Generating certificates and keys ...
	I0110 09:03:30.119164  226492 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:03:30.119263  226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:03:30.119389  226492 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:03:30.119469  226492 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:03:30.119568  226492 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:03:30.119637  226492 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:03:30.119720  226492 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:03:30.119798  226492 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:03:30.119888  226492 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:03:30.119990  226492 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:03:30.120045  226492 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:03:30.120121  226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:03:30.292257  226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:03:30.550762  226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:03:30.719598  226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:03:30.988775  226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:03:31.135675  226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:03:31.136918  226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:03:31.141259  226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:03:31.144663  226492 out.go:252]   - Booting up control plane ...
	I0110 09:03:31.144774  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:03:31.144862  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:03:31.145855  226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:03:31.166964  226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:03:31.167098  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:03:31.174610  226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:03:31.175019  226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:03:31.175233  226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:03:31.309599  226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:03:31.309777  226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:26.065192  225391 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:07:26.065226  225391 kubeadm.go:319] 
	I0110 09:07:26.065310  225391 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:07:26.069589  225391 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:07:26.069662  225391 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:07:26.069750  225391 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:07:26.069814  225391 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:07:26.069847  225391 kubeadm.go:319] OS: Linux
	I0110 09:07:26.069890  225391 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:07:26.069945  225391 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:07:26.069994  225391 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:07:26.070043  225391 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:07:26.070092  225391 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:07:26.070147  225391 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:07:26.070190  225391 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:07:26.070244  225391 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:07:26.070287  225391 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:07:26.070370  225391 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:07:26.070462  225391 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:07:26.070546  225391 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:07:26.070608  225391 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:07:26.076518  225391 out.go:252]   - Generating certificates and keys ...
	I0110 09:07:26.076620  225391 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:07:26.076690  225391 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:07:26.076769  225391 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:07:26.076831  225391 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:07:26.076902  225391 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:07:26.076958  225391 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:07:26.077018  225391 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:07:26.077080  225391 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:07:26.077154  225391 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:07:26.077227  225391 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:07:26.077267  225391 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:07:26.077324  225391 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:07:26.077388  225391 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:07:26.077446  225391 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:07:26.077499  225391 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:07:26.077559  225391 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:07:26.077610  225391 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:07:26.077691  225391 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:07:26.077753  225391 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:07:26.080581  225391 out.go:252]   - Booting up control plane ...
	I0110 09:07:26.080705  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:07:26.080793  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:07:26.080867  225391 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:07:26.080970  225391 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:07:26.081061  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:07:26.081199  225391 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:07:26.081366  225391 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:07:26.081412  225391 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:07:26.081548  225391 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:07:26.081667  225391 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:26.081751  225391 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001204335s
	I0110 09:07:26.081760  225391 kubeadm.go:319] 
	I0110 09:07:26.081826  225391 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:26.081859  225391 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:26.081966  225391 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:26.081971  225391 kubeadm.go:319] 
	I0110 09:07:26.082073  225391 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:26.082123  225391 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:26.082153  225391 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:26.082184  225391 kubeadm.go:319] 
	I0110 09:07:26.082213  225391 kubeadm.go:403] duration metric: took 8m6.697325361s to StartCluster
	I0110 09:07:26.082247  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:26.082328  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:26.116604  225391 cri.go:96] found id: ""
	I0110 09:07:26.116638  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.116647  225391 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:26.116654  225391 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 09:07:26.116715  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:26.141414  225391 cri.go:96] found id: ""
	I0110 09:07:26.141437  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.141445  225391 logs.go:284] No container was found matching "etcd"
	I0110 09:07:26.141452  225391 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 09:07:26.141509  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:26.210270  225391 cri.go:96] found id: ""
	I0110 09:07:26.210292  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.210300  225391 logs.go:284] No container was found matching "coredns"
	I0110 09:07:26.210307  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:26.210364  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:26.245411  225391 cri.go:96] found id: ""
	I0110 09:07:26.245433  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.245441  225391 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:26.245447  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:26.245504  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:26.273289  225391 cri.go:96] found id: ""
	I0110 09:07:26.273311  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.273319  225391 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:26.273326  225391 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:26.273411  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:26.298888  225391 cri.go:96] found id: ""
	I0110 09:07:26.298917  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.298926  225391 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:26.298934  225391 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:26.298990  225391 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:26.323177  225391 cri.go:96] found id: ""
	I0110 09:07:26.323202  225391 logs.go:282] 0 containers: []
	W0110 09:07:26.323212  225391 logs.go:284] No container was found matching "kindnet"
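	With the retries exhausted, minikube sweeps the CRI for every control-plane component, as logged above, and each query returns an empty ID list because no containers were ever created. The same sweep as a short loop, with the component list and crictl invocation taken from the log:

	    # List CRI containers for each expected component; empty output means none exist.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      echo "== $name =="
	      sudo crictl --timeout=10s ps -a --quiet --name="$name"
	    done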
	I0110 09:07:26.323234  225391 logs.go:123] Gathering logs for Docker ...
	I0110 09:07:26.323244  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 09:07:26.345736  225391 logs.go:123] Gathering logs for container status ...
	I0110 09:07:26.345767  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:26.376613  225391 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:26.376639  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:26.434314  225391 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:26.434350  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:26.449986  225391 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:26.450013  225391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:26.516534  225391 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:26.508407    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.509205    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.510762    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.511108    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.512576    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:26.508407    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.509205    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.510762    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.511108    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:26.512576    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
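	The "Gathering logs for ..." steps above collect the last diagnostics before minikube gives up; only the describe-nodes step fails outright, since nothing is listening on localhost:8443. The same collection, with each command copied from the log for manual re-running:

	    # Re-run the diagnostics minikube gathered on failure.
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig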
	W0110 09:07:26.516559  225391 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:07:26.516592  225391 out.go:285] * 
	W0110 09:07:26.516640  225391 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:26.516658  225391 out.go:285] * 
	W0110 09:07:26.516906  225391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:07:26.523726  225391 out.go:203] 
	W0110 09:07:26.526649  225391 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204335s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:26.526712  225391 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:26.526732  225391 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
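	The exit advice names one concrete retry: pin the kubelet cgroup driver to systemd. A hypothetical re-run for this profile; only the --extra-config flag comes from the suggestion above, the remaining flags are illustrative, and whether it resolves this particular hang is untested:

	    # Retry the env profile with the suggested kubelet override.
	    minikube start -p force-systemd-env-861581 --driver=docker --container-runtime=docker \
	      --extra-config=kubelet.cgroup-driver=systemd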
	I0110 09:07:26.529870  225391 out.go:203] 
	
	
	==> Docker <==
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.442156475Z" level=info msg="Restoring containers: start."
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.469906305Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.485881751Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.645496048Z" level=info msg="Loading containers: done."
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.655981131Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.656034416Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.656069206Z" level=info msg="Initializing buildkit"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.672673458Z" level=info msg="Completed buildkit initialization"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.681945919Z" level=info msg="Daemon has completed initialization"
	Jan 10 08:59:16 force-systemd-env-861581 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.684826585Z" level=info msg="API listen on /run/docker.sock"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.685102871Z" level=info msg="API listen on [::]:2376"
	Jan 10 08:59:16 force-systemd-env-861581 dockerd[1140]: time="2026-01-10T08:59:16.685195630Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 10 08:59:17 force-systemd-env-861581 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Start docker client with request timeout 0s"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Loaded network plugin cni"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Setting cgroupDriver systemd"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 10 08:59:17 force-systemd-env-861581 cri-dockerd[1421]: time="2026-01-10T08:59:17Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 10 08:59:17 force-systemd-env-861581 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:28.093191    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:28.094129    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:28.095764    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:28.096403    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:28.097938    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014340] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.489012] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033977] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807327] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.189402] kauditd_printk_skb: 36 callbacks suppressed
	[Jan10 08:46] hrtimer: interrupt took 42078579 ns
	
	
	==> kernel <==
	 09:07:28 up 50 min,  0 user,  load average: 0.72, 1.10, 1.78
	Linux force-systemd-env-861581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 09:07:24 force-systemd-env-861581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:25 force-systemd-env-861581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 09:07:25 force-systemd-env-861581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:25 force-systemd-env-861581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:25 force-systemd-env-861581 kubelet[5524]: E0110 09:07:25.466444    5524 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:25 force-systemd-env-861581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:25 force-systemd-env-861581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:26 force-systemd-env-861581 kubelet[5557]: E0110 09:07:26.247249    5557 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:26 force-systemd-env-861581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:27 force-systemd-env-861581 kubelet[5622]: E0110 09:07:27.034310    5622 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:27 force-systemd-env-861581 kubelet[5668]: E0110 09:07:27.758139    5668 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:27 force-systemd-env-861581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-861581 -n force-systemd-env-861581
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-861581 -n force-systemd-env-861581: exit status 6 (436.155582ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 09:07:28.721506  238609 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-861581" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-861581" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-861581" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-861581
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-861581: (1.79261874s)
--- FAIL: TestForceSystemdEnv (507.61s)
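
Note on the failure above: the v1.35.0 kubelet never comes up because its config validation rejects the cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver stays unreachable and kubeadm times out on the http://127.0.0.1:10248/healthz probe. The kubeadm preflight warning earlier in the log names the escape hatch: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of such an override, assuming failCgroupV1 is the KubeletConfiguration file spelling of that option (an assumption, not something exercised by this job):

    # Sketch: KubeletConfiguration fragment re-enabling cgroup v1 for kubelet v1.35+.
    # failCgroupV1 is assumed to be the config-file spelling of 'FailCgroupV1';
    # per the warning above, the corresponding validation must also be skipped explicitly.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd      # keeps the systemd cgroup driver these tests force
    failCgroupV1: false        # opt back in to the deprecated cgroup v1 support

The suggestion minikube itself prints (--extra-config=kubelet.cgroup-driver=systemd) only changes the cgroup driver, not this validation, so on kubelet v1.35 or newer a cgroup v2 host remains the durable fix.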


Test pass (324/352)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.99
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 3.41
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 77.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 139.23
29 TestAddons/serial/Volcano 41.62
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.92
35 TestAddons/parallel/Registry 15.84
36 TestAddons/parallel/RegistryCreds 0.71
37 TestAddons/parallel/Ingress 20.4
38 TestAddons/parallel/InspektorGadget 11.79
39 TestAddons/parallel/MetricsServer 5.75
41 TestAddons/parallel/CSI 51.7
42 TestAddons/parallel/Headlamp 16.7
43 TestAddons/parallel/CloudSpanner 6.53
44 TestAddons/parallel/LocalPath 54.32
45 TestAddons/parallel/NvidiaDevicePlugin 6.45
46 TestAddons/parallel/Yakd 11.69
48 TestAddons/StoppedEnableDisable 11.47
49 TestCertOptions 32.28
50 TestCertExpiration 248.79
51 TestDockerFlags 36.21
58 TestErrorSpam/setup 28.33
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.6
63 TestErrorSpam/stop 11.17
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 67.2
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.5
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
75 TestFunctional/serial/CacheCmd/cache/add_local 0.97
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 43.09
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 10.96
91 TestFunctional/parallel/DryRun 0.57
92 TestFunctional/parallel/InternationalLanguage 0.3
93 TestFunctional/parallel/StatusCmd 1.21
97 TestFunctional/parallel/ServiceCmdConnect 7.58
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 19.93
101 TestFunctional/parallel/SSHCmd 0.8
102 TestFunctional/parallel/CpCmd 2.12
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.23
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 9.06
130 TestFunctional/parallel/ServiceCmd/List 0.59
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 2.67
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
137 TestFunctional/parallel/DockerEnv/bash 1.47
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.25
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
145 TestFunctional/parallel/ImageCommands/Setup 0.69
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 157.18
164 TestMultiControlPlane/serial/DeployApp 8.07
165 TestMultiControlPlane/serial/PingHostFromPods 1.73
166 TestMultiControlPlane/serial/AddWorkerNode 36.06
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.2
169 TestMultiControlPlane/serial/CopyFile 20.65
170 TestMultiControlPlane/serial/StopSecondaryNode 12.13
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
172 TestMultiControlPlane/serial/RestartSecondaryNode 48.5
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.07
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 200.26
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.57
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
177 TestMultiControlPlane/serial/StopCluster 33.3
178 TestMultiControlPlane/serial/RestartCluster 68.46
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
180 TestMultiControlPlane/serial/AddSecondaryNode 75.97
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
184 TestImageBuild/serial/Setup 29.26
185 TestImageBuild/serial/NormalBuild 1.71
186 TestImageBuild/serial/BuildWithBuildArg 1.09
187 TestImageBuild/serial/BuildWithDockerIgnore 1.09
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
193 TestJSONOutput/start/Command 72.37
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.66
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.58
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 11.15
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 29.1
219 TestKicCustomNetwork/use_default_bridge_network 31.01
220 TestKicExistingNetwork 26.83
221 TestKicCustomSubnet 30.42
222 TestKicStaticIP 32.72
223 TestMainNoArgs 0.06
224 TestMinikubeProfile 62.34
227 TestMountStart/serial/StartWithMountFirst 10.17
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 10.09
230 TestMountStart/serial/VerifyMountSecond 0.28
231 TestMountStart/serial/DeleteFirst 1.57
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.28
234 TestMountStart/serial/RestartStopped 8.52
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 84.7
239 TestMultiNode/serial/DeployApp2Nodes 5.94
240 TestMultiNode/serial/PingHostFrom2Pods 1
241 TestMultiNode/serial/AddNode 34.97
242 TestMultiNode/serial/MultiNodeLabels 0.09
243 TestMultiNode/serial/ProfileList 0.72
244 TestMultiNode/serial/CopyFile 10.44
245 TestMultiNode/serial/StopNode 2.45
246 TestMultiNode/serial/StartAfterStop 9.49
247 TestMultiNode/serial/RestartKeepsNodes 74.55
248 TestMultiNode/serial/DeleteNode 5.78
249 TestMultiNode/serial/StopMultiNode 22.14
250 TestMultiNode/serial/RestartMultiNode 50.72
251 TestMultiNode/serial/ValidateNameConflict 32.44
258 TestScheduledStopUnix 98.89
259 TestSkaffold 134.98
261 TestInsufficientStorage 10.83
262 TestRunningBinaryUpgrade 104.32
264 TestKubernetesUpgrade 186.31
265 TestMissingContainerUpgrade 85.69
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
268 TestNoKubernetes/serial/StartWithK8s 36.31
269 TestNoKubernetes/serial/StartWithStopK8s 18.43
270 TestNoKubernetes/serial/Start 9.47
271 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
273 TestNoKubernetes/serial/ProfileList 1.13
274 TestNoKubernetes/serial/Stop 1.34
275 TestNoKubernetes/serial/StartNoArgs 8.19
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
288 TestStoppedBinaryUpgrade/Setup 0.77
289 TestStoppedBinaryUpgrade/Upgrade 331.99
290 TestPreload/Start-NoPreload-PullImage 80.25
291 TestPreload/Restart-With-Preload-Check-User-Image 50.1
301 TestPause/serial/Start 74.56
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
303 TestNetworkPlugins/group/auto/Start 71.14
304 TestPause/serial/SecondStartNoReconfiguration 29.16
305 TestNetworkPlugins/group/auto/KubeletFlags 0.29
306 TestNetworkPlugins/group/auto/NetCatPod 11.28
307 TestPause/serial/Pause 0.96
308 TestNetworkPlugins/group/auto/DNS 0.3
309 TestNetworkPlugins/group/auto/Localhost 0.23
310 TestNetworkPlugins/group/auto/HairPin 0.21
311 TestPause/serial/VerifyStatus 0.46
312 TestPause/serial/Unpause 0.58
313 TestPause/serial/PauseAgain 0.85
314 TestPause/serial/DeletePaused 2.3
315 TestPause/serial/VerifyDeletedResources 0.45
316 TestNetworkPlugins/group/kindnet/Start 57.7
317 TestNetworkPlugins/group/calico/Start 60.05
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
320 TestNetworkPlugins/group/kindnet/NetCatPod 12.64
321 TestNetworkPlugins/group/kindnet/DNS 0.3
322 TestNetworkPlugins/group/kindnet/Localhost 0.24
323 TestNetworkPlugins/group/kindnet/HairPin 0.24
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.43
326 TestNetworkPlugins/group/calico/NetCatPod 12.4
327 TestNetworkPlugins/group/calico/DNS 0.29
328 TestNetworkPlugins/group/calico/Localhost 0.36
329 TestNetworkPlugins/group/calico/HairPin 0.32
330 TestNetworkPlugins/group/custom-flannel/Start 56.78
331 TestNetworkPlugins/group/false/Start 74.65
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
334 TestNetworkPlugins/group/custom-flannel/DNS 0.19
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
337 TestNetworkPlugins/group/enable-default-cni/Start 42.7
338 TestNetworkPlugins/group/false/KubeletFlags 0.36
339 TestNetworkPlugins/group/false/NetCatPod 11.32
340 TestNetworkPlugins/group/false/DNS 0.25
341 TestNetworkPlugins/group/false/Localhost 0.24
342 TestNetworkPlugins/group/false/HairPin 0.21
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.31
345 TestNetworkPlugins/group/flannel/Start 53.63
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
349 TestNetworkPlugins/group/bridge/Start 71.96
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
352 TestNetworkPlugins/group/flannel/NetCatPod 12.44
353 TestNetworkPlugins/group/flannel/DNS 0.36
354 TestNetworkPlugins/group/flannel/Localhost 0.33
355 TestNetworkPlugins/group/flannel/HairPin 0.41
356 TestNetworkPlugins/group/kubenet/Start 47.45
357 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
358 TestNetworkPlugins/group/bridge/NetCatPod 11.4
359 TestNetworkPlugins/group/bridge/DNS 0.28
360 TestNetworkPlugins/group/bridge/Localhost 0.2
361 TestNetworkPlugins/group/bridge/HairPin 0.21
362 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
363 TestNetworkPlugins/group/kubenet/NetCatPod 10.44
364 TestPreload/PreloadSrc/gcs 4.31
365 TestPreload/PreloadSrc/github 4.31
366 TestPreload/PreloadSrc/gcs-cached 0.5
368 TestStartStop/group/old-k8s-version/serial/FirstStart 94.84
369 TestNetworkPlugins/group/kubenet/DNS 0.27
370 TestNetworkPlugins/group/kubenet/Localhost 0.2
371 TestNetworkPlugins/group/kubenet/HairPin 0.22
373 TestStartStop/group/no-preload/serial/FirstStart 79.9
374 TestStartStop/group/old-k8s-version/serial/DeployApp 10.47
375 TestStartStop/group/no-preload/serial/DeployApp 10.35
376 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
377 TestStartStop/group/old-k8s-version/serial/Stop 11.38
378 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
379 TestStartStop/group/no-preload/serial/Stop 11.43
380 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
381 TestStartStop/group/old-k8s-version/serial/SecondStart 30.96
382 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.56
383 TestStartStop/group/no-preload/serial/SecondStart 58.64
384 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13
385 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
386 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
387 TestStartStop/group/old-k8s-version/serial/Pause 3.63
389 TestStartStop/group/embed-certs/serial/FirstStart 68.85
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/no-preload/serial/Pause 3.71
395 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.1
396 TestStartStop/group/embed-certs/serial/DeployApp 9.4
397 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
398 TestStartStop/group/embed-certs/serial/Stop 11.27
399 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
400 TestStartStop/group/embed-certs/serial/SecondStart 56.14
401 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.63
402 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.37
403 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.62
404 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
405 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 29.87
406 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
408 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
410 TestStartStop/group/embed-certs/serial/Pause 3.09
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
413 TestStartStop/group/newest-cni/serial/FirstStart 33.3
414 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
415 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.16
416 TestStartStop/group/newest-cni/serial/DeployApp 0
417 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
418 TestStartStop/group/newest-cni/serial/Stop 11.25
419 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
420 TestStartStop/group/newest-cni/serial/SecondStart 16.5
421 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
422 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
423 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
424 TestStartStop/group/newest-cni/serial/Pause 2.99

TestDownloadOnly/v1.28.0/json-events (8.99s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-316671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-316671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.989503856s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.99s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 08:20:26.557246    4094 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0110 08:20:26.557317    4094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-316671
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-316671: exit status 85 (89.931745ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-316671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-316671 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:20:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:20:17.607174    4100 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:20:17.607559    4100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:17.607571    4100 out.go:374] Setting ErrFile to fd 2...
	I0110 08:20:17.607577    4100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:17.608286    4100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	W0110 08:20:17.608549    4100 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22427-2299/.minikube/config/config.json: open /home/jenkins/minikube-integration/22427-2299/.minikube/config/config.json: no such file or directory
	I0110 08:20:17.609082    4100 out.go:368] Setting JSON to true
	I0110 08:20:17.609911    4100 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":170,"bootTime":1768033048,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:20:17.610062    4100 start.go:143] virtualization:  
	I0110 08:20:17.615745    4100 out.go:99] [download-only-316671] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0110 08:20:17.615927    4100 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 08:20:17.615959    4100 notify.go:221] Checking for updates...
	I0110 08:20:17.619412    4100 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:20:17.623004    4100 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:20:17.626370    4100 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:20:17.629800    4100 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:20:17.632982    4100 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 08:20:17.638894    4100 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:20:17.639169    4100 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:20:17.670607    4100 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:20:17.670698    4100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:18.076728    4100 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 08:20:18.066891532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:18.076842    4100 docker.go:319] overlay module found
	I0110 08:20:18.080039    4100 out.go:99] Using the docker driver based on user configuration
	I0110 08:20:18.080087    4100 start.go:309] selected driver: docker
	I0110 08:20:18.080095    4100 start.go:928] validating driver "docker" against <nil>
	I0110 08:20:18.080211    4100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:18.140325    4100 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 08:20:18.131668901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:18.140493    4100 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:20:18.140820    4100 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 08:20:18.140984    4100 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:20:18.144194    4100 out.go:171] Using Docker driver with root privileges
	I0110 08:20:18.147227    4100 cni.go:84] Creating CNI manager for ""
	I0110 08:20:18.147301    4100 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 08:20:18.147316    4100 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 08:20:18.147395    4100 start.go:353] cluster config:
	{Name:download-only-316671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-316671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:20:18.150453    4100 out.go:99] Starting "download-only-316671" primary control-plane node in "download-only-316671" cluster
	I0110 08:20:18.150490    4100 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 08:20:18.153489    4100 out.go:99] Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:20:18.153539    4100 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 08:20:18.153699    4100 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:20:18.169070    4100 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:20:18.169239    4100 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:20:18.169385    4100 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:20:18.202155    4100 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 08:20:18.202180    4100 cache.go:65] Caching tarball of preloaded images
	I0110 08:20:18.202345    4100 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 08:20:18.205847    4100 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0110 08:20:18.205878    4100 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 08:20:18.205886    4100 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I0110 08:20:18.284866    4100 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I0110 08:20:18.284999    4100 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 08:20:21.499919    4100 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0110 08:20:21.500292    4100 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/download-only-316671/config.json ...
	I0110 08:20:21.500327    4100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/download-only-316671/config.json: {Name:mk01244c098ca6634b523f41b68e7eee9bf26346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:21.500505    4100 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 08:20:21.500720    4100 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22427-2299/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-316671 host does not exist
	  To start a cluster, run: "minikube start -p download-only-316671"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-316671
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0/json-events (3.41s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-298170 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-298170 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.412732143s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.41s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 08:20:30.405389    4094 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 08:20:30.405424    4094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-298170
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-298170: exit status 85 (85.943786ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-316671 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-316671 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ delete  │ -p download-only-316671                                                                                                                                                       │ download-only-316671 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-298170 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-298170 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:20:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:20:27.040154    4300 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:20:27.040276    4300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:27.040288    4300 out.go:374] Setting ErrFile to fd 2...
	I0110 08:20:27.040294    4300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:27.040536    4300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:20:27.040939    4300 out.go:368] Setting JSON to true
	I0110 08:20:27.041692    4300 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":179,"bootTime":1768033048,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:20:27.041755    4300 start.go:143] virtualization:  
	I0110 08:20:27.045200    4300 out.go:99] [download-only-298170] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:20:27.045467    4300 notify.go:221] Checking for updates...
	I0110 08:20:27.048359    4300 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:20:27.051443    4300 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:20:27.054419    4300 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:20:27.057330    4300 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:20:27.060331    4300 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 08:20:27.066021    4300 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:20:27.066297    4300 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:20:27.090412    4300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:20:27.090511    4300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:27.158276    4300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 08:20:27.149315908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:27.158377    4300 docker.go:319] overlay module found
	I0110 08:20:27.161329    4300 out.go:99] Using the docker driver based on user configuration
	I0110 08:20:27.161391    4300 start.go:309] selected driver: docker
	I0110 08:20:27.161402    4300 start.go:928] validating driver "docker" against <nil>
	I0110 08:20:27.161518    4300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:27.217224    4300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 08:20:27.208509837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:27.217389    4300 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:20:27.217679    4300 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 08:20:27.217830    4300 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:20:27.220967    4300 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-298170 host does not exist
	  To start a cluster, run: "minikube start -p download-only-298170"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)
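Editor's note: exit status 85 is the expected result here, not a failure. The subtest runs `minikube logs` against a download-only profile whose host container was never created, and asserts on that exit code. A minimal sketch of the same check (profile name taken from the output above; the `echo $?` handling is illustrative):

    # Expect exit status 85: the download-only profile has no running host.
    out/minikube-linux-arm64 logs -p download-only-298170
    echo $?   # 85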

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-298170
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)
=== RUN   TestBinaryMirror
I0110 08:20:31.540547    4094 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-103632 --alsologtostderr --binary-mirror http://127.0.0.1:44357 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-103632" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-103632
--- PASS: TestBinaryMirror (0.60s)
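Editor's note: TestBinaryMirror serves the Kubernetes release binaries from a local HTTP endpoint and points minikube at it with --binary-mirror. A rough manual replay of the logged command, assuming a mirror that already serves the v1.35.0 release files is listening on the port shown (the test picks a free port at runtime):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-103632 \
      --binary-mirror http://127.0.0.1:44357 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 delete -p binary-mirror-103632   # cleanup, as the test helper does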

TestOffline (77.16s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-609612 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-609612 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m14.824131192s)
helpers_test.go:176: Cleaning up "offline-docker-609612" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-609612
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-609612: (2.334097098s)
--- PASS: TestOffline (77.16s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-010290
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-010290: exit status 85 (84.816239ms)

-- stdout --
	* Profile "addons-010290" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010290"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-010290
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-010290: exit status 85 (75.446765ms)

-- stdout --
	* Profile "addons-010290" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-010290"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
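Editor's note: both PreSetup subtests assert that addon operations against a profile that does not exist yet fail fast with exit status 85 and point the user at `minikube profile list`. Sketch of the asserted behavior:

    # Both commands should exit 85 before the addons-010290 cluster exists.
    out/minikube-linux-arm64 addons enable dashboard -p addons-010290;  echo $?   # 85
    out/minikube-linux-arm64 addons disable dashboard -p addons-010290; echo $?   # 85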

TestAddons/Setup (139.23s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-010290 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-010290 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.231852367s)
--- PASS: TestAddons/Setup (139.23s)

TestAddons/serial/Volcano (41.62s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 45.473784ms
addons_test.go:886: volcano-controller stabilized in 45.985021ms
addons_test.go:870: volcano-scheduler stabilized in 46.29616ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-868r7" [ba0da92c-81ad-4690-ac74-9dc7efdc098a] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003222874s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-77rwk" [b0014180-d334-4f11-9a43-e21764fa4228] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003019649s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-j59qj" [9dec859e-119b-49bc-8b91-c0a58428fe6b] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003458826s
addons_test.go:905: (dbg) Run:  kubectl --context addons-010290 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-010290 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-010290 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [8882f040-3f79-42dd-abcd-043a632b3d66] Pending
helpers_test.go:353: "test-job-nginx-0" [8882f040-3f79-42dd-abcd-043a632b3d66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [8882f040-3f79-42dd-abcd-043a632b3d66] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003970041s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable volcano --alsologtostderr -v=1: (11.960370232s)
--- PASS: TestAddons/serial/Volcano (41.62s)
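Editor's note: the Volcano flow waits for the scheduler, admission, and controller pods, removes the one-shot admission-init job, then submits a VolcanoJob and waits for its pod to run. Condensed from the commands in the log (testdata paths are relative to the integration test directory):

    kubectl --context addons-010290 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-010290 create -f testdata/vcjob.yaml
    kubectl --context addons-010290 get vcjob -n my-volcano
    # ...then wait for pods labeled volcano.sh/job-name=test-job to reach Running.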

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-010290 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-010290 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (10.92s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-010290 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-010290 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cd1643e7-5040-4fde-bc7b-50a1a5308bbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cd1643e7-5040-4fde-bc7b-50a1a5308bbb] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004662645s
addons_test.go:696: (dbg) Run:  kubectl --context addons-010290 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-010290 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-010290 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-010290 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.92s)
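Editor's note: with the gcp-auth addon active, newly created pods get fake credentials injected; the test verifies the mounted file and both environment variables from inside the busybox pod, exactly as logged:

    kubectl --context addons-010290 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-010290 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-010290 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"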

TestAddons/parallel/Registry (15.84s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.010441ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-vq7kg" [c1d399b0-27e5-4d4c-b281-ba81733d6b58] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00336964s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ttnvj" [28b7a0b2-9068-446f-bb04-3c694a6d067b] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00395887s
addons_test.go:394: (dbg) Run:  kubectl --context addons-010290 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-010290 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-010290 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.905666617s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 ip
2026/01/10 08:24:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.84s)
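Editor's note: the registry addon is probed twice: once from inside the cluster via the service DNS name, and once from the host against the node IP reported by `minikube ip`. The second probe below is a hedged stand-in for the logged `GET http://192.168.49.2:5000`:

    # In-cluster reachability via service DNS:
    kubectl --context addons-010290 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-level reachability against the registry proxy (sketch; the test issues a plain GET):
    curl -s http://192.168.49.2:5000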

TestAddons/parallel/RegistryCreds (0.71s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.944638ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-010290
addons_test.go:334: (dbg) Run:  kubectl --context addons-010290 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

TestAddons/parallel/Ingress (20.4s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-010290 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-010290 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-010290 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [67e43734-37db-4966-b45f-c8f835ecc7c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [67e43734-37db-4966-b45f-c8f835ecc7c8] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003488325s
I0110 08:24:37.352661    4094 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-010290 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable ingress-dns --alsologtostderr -v=1: (1.526637547s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable ingress --alsologtostderr -v=1: (7.854453957s)
--- PASS: TestAddons/parallel/Ingress (20.40s)
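Editor's note: ingress is validated from inside the node (curl against 127.0.0.1 with a Host header matching the Ingress rule) and ingress-dns from the host (nslookup against the minikube IP). Both commands verbatim from the log:

    out/minikube-linux-arm64 -p addons-010290 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2   # 192.168.49.2 comes from `minikube -p addons-010290 ip`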

TestAddons/parallel/InspektorGadget (11.79s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-97fdf" [3c6b28ec-1385-414d-a288-1d366e0d7fb8] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004699692s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable inspektor-gadget --alsologtostderr -v=1: (5.782883887s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/MetricsServer (5.75s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.428631ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-6gtj8" [cf293268-d256-4559-87ef-f698141b0be1] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003293809s
addons_test.go:465: (dbg) Run:  kubectl --context addons-010290 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)
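Editor's note: once the metrics-server pod is healthy, the functional assertion is simply that the metrics API can serve pod metrics; this command fails until the first scrape has completed:

    kubectl --context addons-010290 top pods -n kube-system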

TestAddons/parallel/CSI (51.7s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0110 08:24:09.283977    4094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 08:24:09.288449    4094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 08:24:09.288492    4094 kapi.go:107] duration metric: took 8.078359ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.101589ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [2e7d2858-a7fd-4330-80b5-e2a8365358f5] Pending
helpers_test.go:353: "task-pv-pod" [2e7d2858-a7fd-4330-80b5-e2a8365358f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [2e7d2858-a7fd-4330-80b5-e2a8365358f5] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.023386963s
addons_test.go:574: (dbg) Run:  kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-010290 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-010290 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-010290 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-010290 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d70d8f27-d999-45ad-abbe-4b7bc396b23d] Pending
helpers_test.go:353: "task-pv-pod-restore" [d70d8f27-d999-45ad-abbe-4b7bc396b23d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d70d8f27-d999-45ad-abbe-4b7bc396b23d] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003005122s
addons_test.go:616: (dbg) Run:  kubectl --context addons-010290 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-010290 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-010290 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.133671304s)
--- PASS: TestAddons/parallel/CSI (51.70s)
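Editor's note: the CSI hostpath test exercises the full snapshot round-trip: PVC, pod, VolumeSnapshot, restored PVC, restored pod. Condensed from the logged commands (yaml files live under the test's testdata/csi-hostpath-driver directory):

    kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-010290 delete pod task-pv-pod
    kubectl --context addons-010290 delete pvc hpvc
    kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-010290 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    # Readiness between steps is polled with jsonpath queries, e.g.:
    #   kubectl --context addons-010290 get pvc hpvc -o jsonpath={.status.phase}
    #   kubectl --context addons-010290 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}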

TestAddons/parallel/Headlamp (16.7s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-010290 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-4v9ns" [56850e54-ed4a-4b65-9967-729221ff1d07] Pending
helpers_test.go:353: "headlamp-6d8d595f-4v9ns" [56850e54-ed4a-4b65-9967-729221ff1d07] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-4v9ns" [56850e54-ed4a-4b65-9967-729221ff1d07] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004477167s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable headlamp --alsologtostderr -v=1: (5.808863491s)
--- PASS: TestAddons/parallel/Headlamp (16.70s)

TestAddons/parallel/CloudSpanner (6.53s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-bfx7s" [1b79ac09-ee3e-4df1-8775-17485c610dda] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002617392s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

TestAddons/parallel/LocalPath (54.32s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-010290 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-010290 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b8e35867-2e84-4871-ab8c-e15d1e88a4e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b8e35867-2e84-4871-ab8c-e15d1e88a4e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b8e35867-2e84-4871-ab8c-e15d1e88a4e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00392577s
addons_test.go:969: (dbg) Run:  kubectl --context addons-010290 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 ssh "cat /opt/local-path-provisioner/pvc-e2d6e4cf-9103-440e-8e35-c6d4881daa24_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-010290 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-010290 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.112593689s)
--- PASS: TestAddons/parallel/LocalPath (54.32s)
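Editor's note: the LocalPath check writes through a PVC backed by the rancher local-path provisioner, then reads the file straight off the node; the pvc-... directory name below is the volume created during this particular run:

    out/minikube-linux-arm64 -p addons-010290 ssh \
      "cat /opt/local-path-provisioner/pvc-e2d6e4cf-9103-440e-8e35-c6d4881daa24_default_test-pvc/file1"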

TestAddons/parallel/NvidiaDevicePlugin (6.45s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-hvzns" [1b851d84-d3d7-4d51-b644-df7430138df2] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00542713s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (11.69s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-gxzsh" [79a1e41e-2570-4074-92b3-df9e0d899656] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002960554s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-010290 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-010290 addons disable yakd --alsologtostderr -v=1: (5.689778172s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

TestAddons/StoppedEnableDisable (11.47s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-010290
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-010290: (11.202263351s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-010290
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-010290
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-010290
--- PASS: TestAddons/StoppedEnableDisable (11.47s)

TestCertOptions (32.28s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-218382 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-218382 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (29.293911631s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-218382 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-218382 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-218382 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-218382" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-218382
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-218382: (2.246825138s)
--- PASS: TestCertOptions (32.28s)
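Editor's note: the custom --apiserver-ips/--apiserver-names values are verified by dumping the generated apiserver certificate inside the node and inspecting its SANs, and the custom port by reading the kubeconfig back. Verbatim from the log:

    out/minikube-linux-arm64 -p cert-options-218382 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    kubectl --context cert-options-218382 config view   # server URL should use port 8555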

TestCertExpiration (248.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0110 09:07:48.220623    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:07:51.602755    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (33.042725479s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.614959627s)
helpers_test.go:176: Cleaning up "cert-expiration-989786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-989786
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-989786: (3.125608866s)
--- PASS: TestCertExpiration (248.79s)
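Editor's note: TestCertExpiration first issues certificates with a deliberately short --cert-expiration=3m, waits out most of that window (hence the ~249s wall time), then restarts the same profile with --cert-expiration=8760h to force re-issuance on an existing cluster:

    out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 \
      --cert-expiration=3m --driver=docker --container-runtime=docker
    # ...wait for the certificates to near expiry, then renew by restarting:
    out/minikube-linux-arm64 start -p cert-expiration-989786 --memory=3072 \
      --cert-expiration=8760h --driver=docker --container-runtime=docker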

TestDockerFlags (36.21s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-543601 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0110 09:07:31.886126    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-543601 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.098481813s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-543601 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-543601 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-543601" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-543601
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-543601: (2.35724016s)
--- PASS: TestDockerFlags (36.21s)
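Editor's note: the --docker-env and --docker-opt values are asserted by reading the docker unit back through systemd: Environment should carry FOO=BAR and BAZ=BAT, and ExecStart the debug and icc=true daemon options. Verbatim from the log:

    out/minikube-linux-arm64 -p docker-flags-543601 ssh \
      "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-arm64 -p docker-flags-543601 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager"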

TestErrorSpam/setup (28.33s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-491448 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-491448 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-491448 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-491448 --driver=docker  --container-runtime=docker: (28.334703946s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (28.33s)

TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.17s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.48s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.6s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (11.17s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 stop: (10.982360666s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-491448 --log_dir /tmp/nospam-491448 stop
--- PASS: TestErrorSpam/stop (11.17s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/test/nested/copy/4094/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.2s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-580534 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m7.201205568s)
--- PASS: TestFunctional/serial/StartWithProxy (67.20s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.5s)
=== RUN   TestFunctional/serial/SoftStart
I0110 08:27:50.691479    4094 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --alsologtostderr -v=8
E0110 08:27:51.605580    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.611344    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.621617    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.642023    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.682285    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.762576    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:51.922973    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.243651    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.884505    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:54.165196    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:56.726683    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:28:01.846844    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:28:12.087592    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-580534 --alsologtostderr -v=8: (40.489165616s)
functional_test.go:678: soft start took 40.494463892s for "functional-580534" cluster.
I0110 08:28:31.183065    4094 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (40.50s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-580534 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:3.1: (1.138881016s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:3.3
E0110 08:28:32.568389    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:3.3: (1.102092322s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 cache add registry.k8s.io/pause:latest: (1.035368009s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

TestFunctional/serial/CacheCmd/cache/add_local (0.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-580534 /tmp/TestFunctionalserialCacheCmdcacheadd_local3242649127/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache add minikube-local-cache-test:functional-580534
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache delete minikube-local-cache-test:functional-580534
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-580534
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.97s)
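Editor's note: add_local caches a locally built image with no registry involved: build with the host docker, add it to the profile's cache, then clean up both sides. From the log (the build directory is a per-run temp dir):

    docker build -t minikube-local-cache-test:functional-580534 /tmp/TestFunctionalserialCacheCmdcacheadd_local3242649127/001
    out/minikube-linux-arm64 -p functional-580534 cache add minikube-local-cache-test:functional-580534
    out/minikube-linux-arm64 -p functional-580534 cache delete minikube-local-cache-test:functional-580534
    docker rmi minikube-local-cache-test:functional-580534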

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.635231ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
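
The reload sequence above as a standalone sketch (same profile; `crictl inspecti` exits non-zero while the image is absent, which is the failure deliberately provoked first):

    out/minikube-linux-arm64 -p functional-580534 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
    out/minikube-linux-arm64 -p functional-580534 cache reload                                            # re-push from host cache
    out/minikube-linux-arm64 -p functional-580534 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again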

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 kubectl -- --context functional-580534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-580534 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.09s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0110 08:29:13.529506    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-580534 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.089956686s)
functional_test.go:776: restart took 43.090056929s for "functional-580534" cluster.
I0110 08:29:21.160579    4094 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (43.09s)
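
The restart is a plain `start` against the existing profile; a sketch of the invocation (flag values as in this run):

    # pass a component flag through to the apiserver and block until every
    # verified component reports Ready
    out/minikube-linux-arm64 start -p functional-580534 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all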

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-580534 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 logs: (1.238529864s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.24s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 logs --file /tmp/TestFunctionalserialLogsFileCmd3746270685/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 logs --file /tmp/TestFunctionalserialLogsFileCmd3746270685/001/logs.txt: (1.234281885s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.29s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-580534 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-580534
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-580534: exit status 115 (614.178819ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31118 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-580534 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 config get cpus: exit status 14 (78.34096ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 config get cpus: exit status 14 (56.869921ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
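
A sketch of the config round-trip checked above; `config get` on an unset key exits 14 with an error on stderr:

    out/minikube-linux-arm64 -p functional-580534 config set cpus 2
    out/minikube-linux-arm64 -p functional-580534 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-580534 config unset cpus
    out/minikube-linux-arm64 -p functional-580534 config get cpus    # exit status 14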

TestFunctional/parallel/DashboardCmd (10.96s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-580534 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-580534 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 45903: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.96s)

TestFunctional/parallel/DryRun (0.57s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-580534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (223.686265ms)
-- stdout --
	* [functional-580534] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0110 08:29:58.413661   45286 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:29:58.413869   45286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.413897   45286 out.go:374] Setting ErrFile to fd 2...
	I0110 08:29:58.413916   45286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.414215   45286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:29:58.414640   45286 out.go:368] Setting JSON to false
	I0110 08:29:58.415612   45286 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":751,"bootTime":1768033048,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:29:58.415714   45286 start.go:143] virtualization:  
	I0110 08:29:58.420856   45286 out.go:179] * [functional-580534] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:29:58.423962   45286 notify.go:221] Checking for updates...
	I0110 08:29:58.427370   45286 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:29:58.430789   45286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:29:58.434023   45286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:29:58.436952   45286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:29:58.439761   45286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:29:58.442607   45286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:29:58.445918   45286 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:29:58.446499   45286 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:29:58.489481   45286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:29:58.489584   45286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:29:58.564927   45286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 08:29:58.554617659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:29:58.565031   45286 docker.go:319] overlay module found
	I0110 08:29:58.568368   45286 out.go:179] * Using the docker driver based on existing profile
	I0110 08:29:58.571283   45286 start.go:309] selected driver: docker
	I0110 08:29:58.571304   45286 start.go:928] validating driver "docker" against &{Name:functional-580534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-580534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:29:58.571393   45286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:29:58.574808   45286 out.go:203] 
	W0110 08:29:58.577688   45286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 08:29:58.580424   45286 out.go:203] 
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.57s)
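
A sketch of the behavior under test: `--dry-run` re-validates flags against the saved profile without touching the cluster, and the sub-minimum memory request fails with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY):

    out/minikube-linux-arm64 start -p functional-580534 --dry-run --memory 250MB \
      --driver=docker --container-runtime=docker
    echo $?    # 23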

TestFunctional/parallel/InternationalLanguage (0.3s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-580534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-580534 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (298.198574ms)
-- stdout --
	* [functional-580534] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0110 08:29:58.128945   45184 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:29:58.129152   45184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.129179   45184 out.go:374] Setting ErrFile to fd 2...
	I0110 08:29:58.129199   45184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.130133   45184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:29:58.130603   45184 out.go:368] Setting JSON to false
	I0110 08:29:58.131619   45184 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":751,"bootTime":1768033048,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0110 08:29:58.131714   45184 start.go:143] virtualization:  
	I0110 08:29:58.135615   45184 out.go:179] * [functional-580534] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0110 08:29:58.138593   45184 notify.go:221] Checking for updates...
	I0110 08:29:58.142113   45184 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:29:58.145178   45184 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:29:58.148097   45184 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	I0110 08:29:58.150893   45184 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	I0110 08:29:58.153688   45184 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:29:58.156472   45184 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:29:58.160844   45184 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:29:58.162689   45184 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:29:58.217332   45184 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:29:58.217558   45184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:29:58.340509   45184 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 08:29:58.330095792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:29:58.340617   45184 docker.go:319] overlay module found
	I0110 08:29:58.344016   45184 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 08:29:58.346812   45184 start.go:309] selected driver: docker
	I0110 08:29:58.346832   45184 start.go:928] validating driver "docker" against &{Name:functional-580534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-580534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:29:58.346930   45184 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:29:58.350955   45184 out.go:203] 
	W0110 08:29:58.353760   45184 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 08:29:58.356615   45184 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.21s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
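
The `-f` argument is a Go template over the status struct; a sketch (note the test's template misspells the kubelet label as "kublet", which only changes the printed label, not the value):

    out/minikube-linux-arm64 -p functional-580534 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-580534 status -o json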

TestFunctional/parallel/ServiceCmdConnect (7.58s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-580534 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-580534 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-vn2bt" [a6422076-382f-4a83-950d-6c53778afe88] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-vn2bt" [a6422076-382f-4a83-950d-6c53778afe88] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006515347s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30884
functional_test.go:1685: http://192.168.49.2:30884: success! body:
Request served by hello-node-connect-5d95464fd4-vn2bt

HTTP/1.1 GET /

Host: 192.168.49.2:30884
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.58s)
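
The end-to-end flow as a sketch (deployment name and image as in this run; the NodePort is assigned by Kubernetes, so the URL varies between runs):

    kubectl --context functional-580534 create deployment hello-node-connect \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-580534 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    # resolve the node URL through minikube and hit the echo server
    curl "$(out/minikube-linux-arm64 -p functional-580534 service hello-node-connect --url)"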

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (19.93s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7e92f893-a668-4285-8396-e01d76d91998] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003644174s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-580534 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-580534 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-580534 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-580534 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e53a2dcb-f2d9-408a-a3e8-2d71cf191b3e] Pending
helpers_test.go:353: "sp-pod" [e53a2dcb-f2d9-408a-a3e8-2d71cf191b3e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00307186s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-580534 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-580534 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-580534 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c0d9be7f-c3ed-458b-b72a-5cfbf3a87822] Pending
helpers_test.go:353: "sp-pod" [c0d9be7f-c3ed-458b-b72a-5cfbf3a87822] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003733651s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-580534 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.93s)
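
testdata/storage-provisioner/pvc.yaml is not reproduced in the log; a minimal claim the default StorageClass can bind would look like this sketch (size illustrative, name matching the `get pvc myclaim` call above):

    kubectl --context functional-580534 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF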

TestFunctional/parallel/SSHCmd (0.8s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (2.12s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh -n functional-580534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cp functional-580534:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3530543318/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh -n functional-580534 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh -n functional-580534 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)
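
`minikube cp` addresses the node side as <profile>:<path> and creates missing target directories, as the /tmp/does/not/exist case above shows; a sketch of both directions:

    # host -> node
    out/minikube-linux-arm64 -p functional-580534 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-arm64 -p functional-580534 cp functional-580534:/home/docker/cp-test.txt ./cp-test.txt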

TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4094/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/test/nested/copy/4094/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
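
File sync mirrors $MINIKUBE_HOME/files into the node at start, which is how /etc/test/nested/copy/4094/hosts got there; a sketch (the 4094 path component is this run's test PID, and the file content matches the log above):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/4094"
    echo 'Test file for checking file sync process' \
      > "$MINIKUBE_HOME/files/etc/test/nested/copy/4094/hosts"
    out/minikube-linux-arm64 start -p functional-580534   # files are pushed at start
    out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/test/nested/copy/4094/hosts"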

TestFunctional/parallel/CertSync (2.23s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4094.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/ssl/certs/4094.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4094.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /usr/share/ca-certificates/4094.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/40942.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/ssl/certs/40942.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/40942.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /usr/share/ca-certificates/40942.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-580534 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh "sudo systemctl is-active crio": exit status 1 (349.873624ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
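
`systemctl is-active` prints the unit state and exits non-zero for anything other than "active" (3 for "inactive", which is why the ssh wrapper above reports status 3); a sketch of the check:

    out/minikube-linux-arm64 -p functional-580534 ssh "sudo systemctl is-active crio"     # inactive, exit 3
    out/minikube-linux-arm64 -p functional-580534 ssh "sudo systemctl is-active docker"   # active, exit 0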

TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 42165: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-580534 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b5ba4cb0-503a-41da-807a-4b0b48be0abf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b5ba4cb0-503a-41da-807a-4b0b48be0abf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002986146s
I0110 08:29:39.461813    4094 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)
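
A sketch of the tunnel flow: with `minikube tunnel` running, a LoadBalancer service gets an ingress IP that is routable from the host (testsvc.yaml and the nginx-svc name are as in this run):

    out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr &
    kubectl --context functional-580534 apply -f testdata/testsvc.yaml
    kubectl --context functional-580534 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'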

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-580534 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.244.28 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-580534 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-580534 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-580534 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-jcxbb" [e1d0097b-bdad-4fc9-8a17-c3ec05a0d029] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-jcxbb" [e1d0097b-bdad-4fc9-8a17-c3ec05a0d029] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.002905755s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "363.774394ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "63.265799ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "356.407393ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "69.194723ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
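
The timings above are the point of these subtests: `--light` (and `-l` for the plain listing) skips the per-profile status probes, which is why the light variants return in ~60-70ms here versus ~360ms for the full listings. A sketch of the variants:

    out/minikube-linux-arm64 profile list                    # full listing with status checks
    out/minikube-linux-arm64 profile list -o json --light    # fast, status checks skipped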

TestFunctional/parallel/MountCmd/any-port (9.06s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdany-port1798069458/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768033790040945951" to /tmp/TestFunctionalparallelMountCmdany-port1798069458/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768033790040945951" to /tmp/TestFunctionalparallelMountCmdany-port1798069458/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768033790040945951" to /tmp/TestFunctionalparallelMountCmdany-port1798069458/001/test-1768033790040945951
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.727467ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0110 08:29:50.382620    4094 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 test-1768033790040945951
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh cat /mount-9p/test-1768033790040945951
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-580534 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [80a2dd5e-d338-455a-8f8f-6e345f16615e] Pending
helpers_test.go:353: "busybox-mount" [80a2dd5e-d338-455a-8f8f-6e345f16615e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [80a2dd5e-d338-455a-8f8f-6e345f16615e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [80a2dd5e-d338-455a-8f8f-6e345f16615e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008155092s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-580534 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdany-port1798069458/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.06s)
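
The first findmnt probe above exits non-zero because the 9p mount is still being attached when the check fires; the harness logs "will retry after 300ms" (retry.go:84) and the second probe succeeds. Below is a minimal Go sketch of that poll-and-retry pattern, assuming only what the log shows: the doubling backoff and the attempt cap are illustrative, not minikube's actual retry policy.

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// waitForMount polls `findmnt -T <path>` until it reports a mount or
// attempts run out. findmnt exits 0 only once the target is mounted,
// which is the same signal the test above keys on.
func waitForMount(path string, attempts int) error {
    delay := 300 * time.Millisecond
    for i := 0; i < attempts; i++ {
        err := exec.Command("findmnt", "-T", path).Run()
        if err == nil {
            return nil
        }
        fmt.Printf("will retry after %v: %v\n", delay, err)
        time.Sleep(delay)
        delay *= 2 // assumed backoff; the log only shows a single 300ms retry
    }
    return fmt.Errorf("%s not mounted after %d attempts", path, attempts)
}

func main() {
    if err := waitForMount("/mount-9p", 5); err != nil {
        fmt.Println(err)
    }
}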

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service list -o json
functional_test.go:1509: Took "515.026801ms" to run "out/minikube-linux-arm64 -p functional-580534 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30385
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30385
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
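
The HTTPS and URL subtests above resolve hello-node to the same NodePort endpoint on the node IP (192.168.49.2:30385). A hedged Go sketch of verifying that such an endpoint answers; the URL is hard-coded from this particular run and will differ on any other cluster.

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Endpoint reported by `minikube service hello-node --url` in this
    // run; substitute whatever URL your own cluster prints.
    client := &http.Client{Timeout: 5 * time.Second}
    resp, err := client.Get("http://192.168.49.2:30385")
    if err != nil {
        fmt.Println("endpoint unreachable:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}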

TestFunctional/parallel/MountCmd/specific-port (2.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdspecific-port2089408853/001:/mount-9p --alsologtostderr -v=1 --port 35875]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.327887ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0110 08:29:59.525177    4094 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdspecific-port2089408853/001:/mount-9p --alsologtostderr -v=1 --port 35875] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh "sudo umount -f /mount-9p": exit status 1 (442.12004ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-580534 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdspecific-port2089408853/001:/mount-9p --alsologtostderr -v=1 --port 35875] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.67s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T" /mount1: (1.065293207s)
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-580534 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-580534 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3943271879/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

TestFunctional/parallel/DockerEnv/bash (1.47s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-580534 docker-env) && out/minikube-linux-arm64 status -p functional-580534"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-580534 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.47s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 version -o=json --components: (1.24722708s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-580534 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-580534
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-580534 image ls --format short --alsologtostderr:
I0110 08:30:14.416535   48760 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:14.416719   48760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.416725   48760 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:14.416730   48760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.417000   48760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:30:14.417671   48760 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.417787   48760 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.418324   48760 cli_runner.go:164] Run: docker container inspect functional-580534 --format={{.State.Status}}
I0110 08:30:14.445968   48760 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:14.446029   48760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580534
I0110 08:30:14.473496   48760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/functional-580534/id_rsa Username:docker}
I0110 08:30:14.577559   48760 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-580534 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-580534 │ 07120aefffb06 │ 30B    │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 611c6647fcbbc │ 61.2MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-580534 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-580534 image ls --format table --alsologtostderr:
I0110 08:30:14.979664   48933 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:14.984720   48933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.984744   48933 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:14.984750   48933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.985141   48933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:30:14.988518   48933 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.989578   48933 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.990470   48933 cli_runner.go:164] Run: docker container inspect functional-580534 --format={{.State.Status}}
I0110 08:30:15.017338   48933 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:15.017413   48933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580534
I0110 08:30:15.043063   48933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/functional-580534/id_rsa Username:docker}
I0110 08:30:15.148880   48933 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-580534 image ls --format json --alsologtostderr:
[{"id":"07120aefffb061598823d2e18e84a4d9f982464f6a5755e21de752590cef409d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-580534"],"size":"30"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"61200000"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-580534 image ls --format json --alsologtostderr:
I0110 08:30:14.717892   48837 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:14.718109   48837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.718123   48837 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:14.718130   48837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.718498   48837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:30:14.719317   48837 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.719517   48837 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.720199   48837 cli_runner.go:164] Run: docker container inspect functional-580534 --format={{.State.Status}}
I0110 08:30:14.752475   48837 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:14.752537   48837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580534
I0110 08:30:14.773171   48837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/functional-580534/id_rsa Username:docker}
I0110 08:30:14.887687   48837 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
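
The output above is a single JSON array of objects with id, repoDigests, repoTags, and a string-typed size. A short Go sketch that decodes entries into a struct derived purely from that output; the Go-side type and field names are ours, not minikube's.

package main

import (
    "encoding/json"
    "fmt"
)

// image mirrors the fields visible in the `image ls --format json` output.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
    // One entry copied verbatim from the run above.
    raw := `[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"}]`
    var images []image
    if err := json.Unmarshal([]byte(raw), &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
    }
}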

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-580534 image ls --format yaml --alsologtostderr:
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 07120aefffb061598823d2e18e84a4d9f982464f6a5755e21de752590cef409d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-580534
size: "30"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "61200000"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-580534 image ls --format yaml --alsologtostderr:
I0110 08:30:14.424331   48764 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:14.424524   48764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.424553   48764 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:14.424575   48764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:14.424851   48764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:30:14.425525   48764 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.425685   48764 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:14.426292   48764 cli_runner.go:164] Run: docker container inspect functional-580534 --format={{.State.Status}}
I0110 08:30:14.448234   48764 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:14.448293   48764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580534
I0110 08:30:14.474427   48764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/functional-580534/id_rsa Username:docker}
I0110 08:30:14.588469   48764 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-580534 ssh pgrep buildkitd: exit status 1 (352.96764ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image build -t localhost/my-image:functional-580534 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-580534 image build -t localhost/my-image:functional-580534 testdata/build --alsologtostderr: (3.158917403s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-580534 image build -t localhost/my-image:functional-580534 testdata/build --alsologtostderr:
I0110 08:30:15.042423   48939 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:15.042633   48939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:15.042639   48939 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:15.042644   48939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:15.043994   48939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:30:15.044943   48939 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:15.046603   48939 config.go:182] Loaded profile config "functional-580534": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:30:15.047538   48939 cli_runner.go:164] Run: docker container inspect functional-580534 --format={{.State.Status}}
I0110 08:30:15.085750   48939 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:15.085805   48939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580534
I0110 08:30:15.110540   48939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/functional-580534/id_rsa Username:docker}
I0110 08:30:15.224124   48939 build_images.go:162] Building image from path: /tmp/build.698842225.tar
I0110 08:30:15.224219   48939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 08:30:15.232143   48939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.698842225.tar
I0110 08:30:15.235908   48939 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.698842225.tar: stat -c "%s %y" /var/lib/minikube/build/build.698842225.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.698842225.tar': No such file or directory
I0110 08:30:15.235937   48939 ssh_runner.go:362] scp /tmp/build.698842225.tar --> /var/lib/minikube/build/build.698842225.tar (3072 bytes)
I0110 08:30:15.254461   48939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.698842225
I0110 08:30:15.262824   48939 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.698842225 -xf /var/lib/minikube/build/build.698842225.tar
I0110 08:30:15.271188   48939 docker.go:364] Building image: /var/lib/minikube/build/build.698842225
I0110 08:30:15.271290   48939 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-580534 /var/lib/minikube/build/build.698842225
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:33c788db84aab6d2a0ce90f0a32389c3cff7e692d39ac9d4b6a0a3207ab08e69 done
#8 naming to localhost/my-image:functional-580534 done
#8 DONE 0.1s
I0110 08:30:18.104952   48939 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-580534 /var/lib/minikube/build/build.698842225: (2.833635722s)
I0110 08:30:18.105032   48939 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.698842225
I0110 08:30:18.113410   48939 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.698842225.tar
I0110 08:30:18.121743   48939 build_images.go:218] Built localhost/my-image:functional-580534 from /tmp/build.698842225.tar
I0110 08:30:18.121775   48939 build_images.go:134] succeeded building to: functional-580534
I0110 08:30:18.121780   48939 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
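
The stderr trace above lays out the build flow: pack the local testdata/build context into a tar (build_images.go:162), copy it to /var/lib/minikube/build on the node, untar it, and run docker build there. A sketch of the packing step follows, assuming nothing about minikube internals beyond the tar-shaped context the log shows; the paths in main are illustrative.

package main

import (
    "archive/tar"
    "io"
    "os"
    "path/filepath"
)

// tarDir archives every regular file under dir into out, producing the
// same shape of build-context tar the harness ships to the node.
func tarDir(dir, out string) error {
    f, err := os.Create(out)
    if err != nil {
        return err
    }
    defer f.Close()
    tw := tar.NewWriter(f)
    defer tw.Close()

    return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() {
            return err
        }
        hdr, err := tar.FileInfoHeader(info, "")
        if err != nil {
            return err
        }
        rel, err := filepath.Rel(dir, path)
        if err != nil {
            return err
        }
        hdr.Name = rel // store paths relative to the context root
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        src, err := os.Open(path)
        if err != nil {
            return err
        }
        defer src.Close()
        _, err = io.Copy(tw, src)
        return err
    })
}

func main() {
    if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
        panic(err)
    }
}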

TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
2026/01/10 08:30:09 [DEBUG] GET http://127.0.0.1:37711/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-580534 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580534
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-580534
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-580534
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (157.18s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0110 08:30:35.449928    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:32:51.602761    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m36.220788144s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (157.18s)

TestMultiControlPlane/serial/DeployApp (8.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 kubectl -- rollout status deployment/busybox: (5.030159203s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-bjwmd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-tpvzh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-vmlf5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-bjwmd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-tpvzh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-vmlf5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-bjwmd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-tpvzh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-vmlf5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.07s)

TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-bjwmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-bjwmd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-tpvzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-tpvzh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-vmlf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 kubectl -- exec busybox-769dd8b7dd-vmlf5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)
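
Each busybox pod above resolves host.minikube.internal with nslookup, scrapes the address out of the fifth line of output (awk 'NR==5' | cut -d' ' -f3), and pings the host gateway once. A Go sketch of the same probe that uses the resolver directly instead of text-scraping; it only works where that name resolves, i.e. from inside a pod on the cluster.

package main

import (
    "fmt"
    "net"
    "os/exec"
)

func main() {
    // Resolve the host-gateway name the pods use, then ping it once,
    // mirroring the nslookup/ping pair in the test above.
    addrs, err := net.LookupHost("host.minikube.internal")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    out, err := exec.Command("ping", "-c", "1", addrs[0]).CombinedOutput()
    fmt.Print(string(out))
    if err != nil {
        fmt.Println("ping failed:", err)
    }
}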

TestMultiControlPlane/serial/AddWorkerNode (36.06s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node add --alsologtostderr -v 5
E0110 08:33:19.290116    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 node add --alsologtostderr -v 5: (34.897503688s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5: (1.162995223s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.06s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-409373 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.204136213s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.20s)

TestMultiControlPlane/serial/CopyFile (20.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 status --output json --alsologtostderr -v 5: (1.131725562s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp testdata/cp-test.txt ha-409373:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1042000202/001/cp-test_ha-409373.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373:/home/docker/cp-test.txt ha-409373-m02:/home/docker/cp-test_ha-409373_ha-409373-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test_ha-409373_ha-409373-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373:/home/docker/cp-test.txt ha-409373-m03:/home/docker/cp-test_ha-409373_ha-409373-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test_ha-409373_ha-409373-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373:/home/docker/cp-test.txt ha-409373-m04:/home/docker/cp-test_ha-409373_ha-409373-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test_ha-409373_ha-409373-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp testdata/cp-test.txt ha-409373-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1042000202/001/cp-test_ha-409373-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m02:/home/docker/cp-test.txt ha-409373:/home/docker/cp-test_ha-409373-m02_ha-409373.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test_ha-409373-m02_ha-409373.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m02:/home/docker/cp-test.txt ha-409373-m03:/home/docker/cp-test_ha-409373-m02_ha-409373-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test_ha-409373-m02_ha-409373-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m02:/home/docker/cp-test.txt ha-409373-m04:/home/docker/cp-test_ha-409373-m02_ha-409373-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test_ha-409373-m02_ha-409373-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp testdata/cp-test.txt ha-409373-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1042000202/001/cp-test_ha-409373-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m03:/home/docker/cp-test.txt ha-409373:/home/docker/cp-test_ha-409373-m03_ha-409373.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test_ha-409373-m03_ha-409373.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m03:/home/docker/cp-test.txt ha-409373-m02:/home/docker/cp-test_ha-409373-m03_ha-409373-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test_ha-409373-m03_ha-409373-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m03:/home/docker/cp-test.txt ha-409373-m04:/home/docker/cp-test_ha-409373-m03_ha-409373-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test_ha-409373-m03_ha-409373-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp testdata/cp-test.txt ha-409373-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1042000202/001/cp-test_ha-409373-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m04:/home/docker/cp-test.txt ha-409373:/home/docker/cp-test_ha-409373-m04_ha-409373.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373 "sudo cat /home/docker/cp-test_ha-409373-m04_ha-409373.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m04:/home/docker/cp-test.txt ha-409373-m02:/home/docker/cp-test_ha-409373-m04_ha-409373-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test_ha-409373-m04_ha-409373-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 cp ha-409373-m04:/home/docker/cp-test.txt ha-409373-m03:/home/docker/cp-test_ha-409373-m04_ha-409373-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m03 "sudo cat /home/docker/cp-test_ha-409373-m04_ha-409373-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.65s)
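Note: the copy matrix above is one round-trip repeated for every (source, destination) node pair; a minimal sketch using this run's profile and paths (ha-409373 is specific to this job):
    out/minikube-linux-arm64 -p ha-409373 cp testdata/cp-test.txt ha-409373-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-409373 ssh -n ha-409373-m02 "sudo cat /home/docker/cp-test.txt"
The cp command stages the file into the named node and the ssh read-back verifies the transfer.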

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.13s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 node stop m02 --alsologtostderr -v 5: (11.283357918s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5: exit status 7 (844.656156ms)

-- stdout --
	ha-409373
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409373-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409373-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409373-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0110 08:34:17.364506   70881 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:34:17.364677   70881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:34:17.364691   70881 out.go:374] Setting ErrFile to fd 2...
	I0110 08:34:17.364698   70881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:34:17.364981   70881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:34:17.365260   70881 out.go:368] Setting JSON to false
	I0110 08:34:17.365309   70881 mustload.go:66] Loading cluster: ha-409373
	I0110 08:34:17.365453   70881 notify.go:221] Checking for updates...
	I0110 08:34:17.365861   70881 config.go:182] Loaded profile config "ha-409373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:34:17.365888   70881 status.go:174] checking status of ha-409373 ...
	I0110 08:34:17.369531   70881 cli_runner.go:164] Run: docker container inspect ha-409373 --format={{.State.Status}}
	I0110 08:34:17.390652   70881 status.go:371] ha-409373 host status = "Running" (err=<nil>)
	I0110 08:34:17.390676   70881 host.go:66] Checking if "ha-409373" exists ...
	I0110 08:34:17.391159   70881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409373
	I0110 08:34:17.421410   70881 host.go:66] Checking if "ha-409373" exists ...
	I0110 08:34:17.421793   70881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:34:17.421845   70881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409373
	I0110 08:34:17.450145   70881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/ha-409373/id_rsa Username:docker}
	I0110 08:34:17.560114   70881 ssh_runner.go:195] Run: systemctl --version
	I0110 08:34:17.566648   70881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:34:17.581927   70881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:34:17.659474   70881 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-10 08:34:17.64654942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:34:17.660007   70881 kubeconfig.go:125] found "ha-409373" server: "https://192.168.49.254:8443"
	I0110 08:34:17.660038   70881 api_server.go:166] Checking apiserver status ...
	I0110 08:34:17.660084   70881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:34:17.677462   70881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2109/cgroup
	I0110 08:34:17.686972   70881 api_server.go:192] apiserver freezer: "8:freezer:/docker/e45feb0f1b8b7c5491a4e11bac4ad6e891fe137b90dfe9fa8810583d2b6c5704/kubepods/burstable/pod2d1f8ed3939c89d660897d2c7862ded8/10e869c1c9d409069a90597586ad0625fc0b0a19f4a0b5c5f625bbe6b9170d22"
	I0110 08:34:17.687049   70881 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e45feb0f1b8b7c5491a4e11bac4ad6e891fe137b90dfe9fa8810583d2b6c5704/kubepods/burstable/pod2d1f8ed3939c89d660897d2c7862ded8/10e869c1c9d409069a90597586ad0625fc0b0a19f4a0b5c5f625bbe6b9170d22/freezer.state
	I0110 08:34:17.695889   70881 api_server.go:214] freezer state: "THAWED"
	I0110 08:34:17.695924   70881 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:34:17.710229   70881 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:34:17.710271   70881 status.go:463] ha-409373 apiserver status = Running (err=<nil>)
	I0110 08:34:17.710281   70881 status.go:176] ha-409373 status: &{Name:ha-409373 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:34:17.710299   70881 status.go:174] checking status of ha-409373-m02 ...
	I0110 08:34:17.710662   70881 cli_runner.go:164] Run: docker container inspect ha-409373-m02 --format={{.State.Status}}
	I0110 08:34:17.729574   70881 status.go:371] ha-409373-m02 host status = "Stopped" (err=<nil>)
	I0110 08:34:17.729604   70881 status.go:384] host is not running, skipping remaining checks
	I0110 08:34:17.729612   70881 status.go:176] ha-409373-m02 status: &{Name:ha-409373-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:34:17.729630   70881 status.go:174] checking status of ha-409373-m03 ...
	I0110 08:34:17.729941   70881 cli_runner.go:164] Run: docker container inspect ha-409373-m03 --format={{.State.Status}}
	I0110 08:34:17.747737   70881 status.go:371] ha-409373-m03 host status = "Running" (err=<nil>)
	I0110 08:34:17.747764   70881 host.go:66] Checking if "ha-409373-m03" exists ...
	I0110 08:34:17.748057   70881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409373-m03
	I0110 08:34:17.766351   70881 host.go:66] Checking if "ha-409373-m03" exists ...
	I0110 08:34:17.766670   70881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:34:17.766715   70881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409373-m03
	I0110 08:34:17.784281   70881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/ha-409373-m03/id_rsa Username:docker}
	I0110 08:34:17.888570   70881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:34:17.904820   70881 kubeconfig.go:125] found "ha-409373" server: "https://192.168.49.254:8443"
	I0110 08:34:17.904910   70881 api_server.go:166] Checking apiserver status ...
	I0110 08:34:17.904989   70881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:34:17.920681   70881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup
	I0110 08:34:17.929491   70881 api_server.go:192] apiserver freezer: "8:freezer:/docker/114d01778c2a412026e9663aedd0acad065484180b3eb88eaf52dedd3b3d9d6e/kubepods/burstable/pod06fbf35f57ddee1f2ad497cfc8c2aded/5acd6db78ef8eb957934e6129c8cd6fb98e57685147ef4443f7348f1eaa04ecc"
	I0110 08:34:17.929582   70881 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/114d01778c2a412026e9663aedd0acad065484180b3eb88eaf52dedd3b3d9d6e/kubepods/burstable/pod06fbf35f57ddee1f2ad497cfc8c2aded/5acd6db78ef8eb957934e6129c8cd6fb98e57685147ef4443f7348f1eaa04ecc/freezer.state
	I0110 08:34:17.937502   70881 api_server.go:214] freezer state: "THAWED"
	I0110 08:34:17.937532   70881 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:34:17.946094   70881 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:34:17.946126   70881 status.go:463] ha-409373-m03 apiserver status = Running (err=<nil>)
	I0110 08:34:17.946135   70881 status.go:176] ha-409373-m03 status: &{Name:ha-409373-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:34:17.946153   70881 status.go:174] checking status of ha-409373-m04 ...
	I0110 08:34:17.946465   70881 cli_runner.go:164] Run: docker container inspect ha-409373-m04 --format={{.State.Status}}
	I0110 08:34:17.964697   70881 status.go:371] ha-409373-m04 host status = "Running" (err=<nil>)
	I0110 08:34:17.964725   70881 host.go:66] Checking if "ha-409373-m04" exists ...
	I0110 08:34:17.965043   70881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409373-m04
	I0110 08:34:17.986874   70881 host.go:66] Checking if "ha-409373-m04" exists ...
	I0110 08:34:17.987181   70881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:34:17.987230   70881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409373-m04
	I0110 08:34:18.011815   70881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/ha-409373-m04/id_rsa Username:docker}
	I0110 08:34:18.123342   70881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:34:18.146558   70881 status.go:176] ha-409373-m04 status: &{Name:ha-409373-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.13s)
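Note: the non-zero exit above is the expected outcome, not a failure: with node m02 stopped, minikube status reports the degraded profile through its exit code (7 in this run), so callers can gate on the code rather than parse the table. A sketch, same profile:
    out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
    echo $?    # non-zero (7 here) while any node is stopped; 0 once all are running again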

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.5s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node start m02 --alsologtostderr -v 5
E0110 08:34:28.841573    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:28.846848    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:28.857143    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:28.877474    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:28.917748    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:28.998034    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:29.158537    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:29.478780    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:30.119276    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:31.399843    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:33.961562    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:39.082000    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:49.322936    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 node start m02 --alsologtostderr -v 5: (47.269905563s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5: (1.132962886s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.50s)
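Note: recovery is the mirror image of the stop: restart the node, re-check status, then confirm the apiserver's view of the cluster; a sketch with this run's profile:
    out/minikube-linux-arm64 -p ha-409373 node start m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
    kubectl get nodes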

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.071368894s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.26s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 stop --alsologtostderr -v 5
E0110 08:35:09.803257    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 stop --alsologtostderr -v 5: (34.760148357s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 start --wait true --alsologtostderr -v 5
E0110 08:35:50.764093    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:37:12.684255    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:37:51.602110    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 start --wait true --alsologtostderr -v 5: (2m45.359970542s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.26s)
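Note: this subtest's assertion is that a full stop/start cycle preserves the node list; a sketch of the cycle (profile name from this run; --wait true makes start block until the cluster components report healthy):
    out/minikube-linux-arm64 -p ha-409373 node list --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-409373 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-409373 start --wait true --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-409373 node list --alsologtostderr -v 5
The node list output before and after the cycle is compared.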

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 node delete m03 --alsologtostderr -v 5: (10.522212594s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)
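Note: the go-template above prints one Ready condition status per node, reducing the post-delete readiness check to counting True lines; verbatim from this run:
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"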

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (33.3s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 stop --alsologtostderr -v 5: (33.18827722s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5: exit status 7 (106.838611ms)

-- stdout --
	ha-409373
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409373-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409373-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 08:39:14.434600   98999 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:39:14.435041   98999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:39:14.435159   98999 out.go:374] Setting ErrFile to fd 2...
	I0110 08:39:14.435180   98999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:39:14.435471   98999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:39:14.435685   98999 out.go:368] Setting JSON to false
	I0110 08:39:14.435715   98999 mustload.go:66] Loading cluster: ha-409373
	I0110 08:39:14.436138   98999 config.go:182] Loaded profile config "ha-409373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:39:14.436166   98999 status.go:174] checking status of ha-409373 ...
	I0110 08:39:14.436680   98999 cli_runner.go:164] Run: docker container inspect ha-409373 --format={{.State.Status}}
	I0110 08:39:14.437185   98999 notify.go:221] Checking for updates...
	I0110 08:39:14.455707   98999 status.go:371] ha-409373 host status = "Stopped" (err=<nil>)
	I0110 08:39:14.455731   98999 status.go:384] host is not running, skipping remaining checks
	I0110 08:39:14.455744   98999 status.go:176] ha-409373 status: &{Name:ha-409373 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:39:14.455776   98999 status.go:174] checking status of ha-409373-m02 ...
	I0110 08:39:14.456074   98999 cli_runner.go:164] Run: docker container inspect ha-409373-m02 --format={{.State.Status}}
	I0110 08:39:14.476147   98999 status.go:371] ha-409373-m02 host status = "Stopped" (err=<nil>)
	I0110 08:39:14.476164   98999 status.go:384] host is not running, skipping remaining checks
	I0110 08:39:14.476170   98999 status.go:176] ha-409373-m02 status: &{Name:ha-409373-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:39:14.476189   98999 status.go:174] checking status of ha-409373-m04 ...
	I0110 08:39:14.476472   98999 cli_runner.go:164] Run: docker container inspect ha-409373-m04 --format={{.State.Status}}
	I0110 08:39:14.492722   98999 status.go:371] ha-409373-m04 host status = "Stopped" (err=<nil>)
	I0110 08:39:14.492741   98999 status.go:384] host is not running, skipping remaining checks
	I0110 08:39:14.492747   98999 status.go:176] ha-409373-m04 status: &{Name:ha-409373-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.30s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (68.46s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0110 08:39:28.841618    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:39:56.524443    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m7.397213271s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.97s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 node add --control-plane --alsologtostderr -v 5: (1m14.787126126s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5: (1.184122342s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.97s)
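Note: growing the control plane back after a delete is a single command; minikube joins the new machine as an additional control-plane member and the follow-up status confirms all nodes. Sketch with this run's profile:
    out/minikube-linux-arm64 -p ha-409373 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-409373 status --alsologtostderr -v 5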

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.079789705s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestImageBuild/serial/Setup (29.26s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-844733 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-844733 --driver=docker  --container-runtime=docker: (29.258122834s)
--- PASS: TestImageBuild/serial/Setup (29.26s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.71s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-844733
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-844733: (1.707198173s)
--- PASS: TestImageBuild/serial/NormalBuild (1.71s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.09s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-844733
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-844733: (1.093894846s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.09s)
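Note: --build-opt forwards options to the underlying docker build, so build-arg=ENV_A=test_env_str is assumed to reach the Dockerfile as a regular --build-arg (the testdata Dockerfile itself is not reproduced in this log and would need an ARG ENV_A line). Command verbatim from this run:
    out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-844733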

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.09s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-844733
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-844733: (1.092173347s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.09s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-844733
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)
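Note: the -f path here appears to resolve relative to the build context directory, i.e. inner/Dockerfile inside ./testdata/image-build/test-f; verbatim from this run:
    out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-844733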

                                                
                                    
TestJSONOutput/start/Command (72.37s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-630258 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0110 08:42:51.602539    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-630258 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m12.361216069s)
--- PASS: TestJSONOutput/start/Command (72.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-630258 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-630258 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (11.15s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-630258 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-630258 --output=json --user=testUser: (11.149441752s)
--- PASS: TestJSONOutput/stop/Command (11.15s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-184756 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-184756 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.061131ms)

-- stdout --
	{"specversion":"1.0","id":"2b1c9fee-9456-4c49-8dcd-769198017052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-184756] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"829cba65-acb5-4384-b00d-b58193b3671c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"f51257b0-b403-4e87-8921-ea0179acb29c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcbf716f-5a8e-459b-8f5b-933cc7054610","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig"}}
	{"specversion":"1.0","id":"22ff3153-6b0a-4657-b6d6-ec3e9dbcc9fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube"}}
	{"specversion":"1.0","id":"a468ffaf-bec6-4859-85e1-93dcfb56daa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ff1f2f8b-f1b1-4cea-aa68-ff8e187199c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d16e409b-d70a-48ef-aea0-8a4247c92595","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-184756" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-184756
--- PASS: TestErrorJSONOutput (0.24s)
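Note: each stdout line above is a CloudEvents-style JSON envelope (specversion, id, source, type, data), so error handling can be scripted; a sketch of filtering the stream for error events with jq (jq is an illustration here, not part of the test):
    out/minikube-linux-arm64 start -p json-output-error-184756 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error")'
In this run that would yield the single DRV_UNSUPPORTED_OS event, whose exitcode field (56) matches the process exit status.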

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.1s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-034220 --network=
E0110 08:44:14.650334    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-034220 --network=: (26.81304177s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-034220" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-034220
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-034220: (2.259103789s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.10s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.01s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-207864 --network=bridge
E0110 08:44:28.844524    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-207864 --network=bridge: (28.876811443s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-207864" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-207864
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-207864: (2.11047628s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.01s)

                                                
                                    
TestKicExistingNetwork (26.83s)
=== RUN   TestKicExistingNetwork
I0110 08:44:54.335787    4094 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 08:44:54.351430    4094 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 08:44:54.351500    4094 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 08:44:54.351517    4094 cli_runner.go:164] Run: docker network inspect existing-network
W0110 08:44:54.366350    4094 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 08:44:54.366379    4094 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0110 08:44:54.366396    4094 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0110 08:44:54.366508    4094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 08:44:54.382966    4094 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1cad6f167682 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:2e:00:65:f8:e1} reservation:<nil>}
I0110 08:44:54.383232    4094 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400213f200}
I0110 08:44:54.383261    4094 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 08:44:54.383309    4094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 08:44:54.439679    4094 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-703275 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-703275 --network=existing-network: (24.569598876s)
helpers_test.go:176: Cleaning up "existing-network-703275" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-703275
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-703275: (2.119647201s)
I0110 08:45:21.145600    4094 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.83s)
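For reference, the pre-created network this test consumes can be reproduced with the docker CLI alone. A minimal sketch based on the flags logged above (name, subnet, and gateway are the values from this run; the "-o --ip-masq -o --icc" options that appear in the logged command are omitted here):

    docker network create --driver=bridge \
        --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=existing-network \
        existing-network
    # then point minikube at the existing network, as the test does:
    out/minikube-linux-arm64 start -p existing-network-703275 --network=existing-network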
TestKicCustomSubnet (30.42s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-756456 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-756456 --subnet=192.168.60.0/24: (28.174733929s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-756456 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-756456" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-756456
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-756456: (2.220542554s)
--- PASS: TestKicCustomSubnet (30.42s)
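The assertion behind this test reduces to a single docker inspect call; a sketch using the profile name and subnet from this run:

    out/minikube-linux-arm64 start -p custom-subnet-756456 --subnet=192.168.60.0/24
    # should print the requested subnet, 192.168.60.0/24:
    docker network inspect custom-subnet-756456 --format "{{(index .IPAM.Config 0).Subnet}}"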
TestKicStaticIP (32.72s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-655541 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-655541 --static-ip=192.168.200.200: (30.348074469s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-655541 ip
helpers_test.go:176: Cleaning up "static-ip-655541" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-655541
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-655541: (2.217165428s)
--- PASS: TestKicStaticIP (32.72s)
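The equivalent manual check for the static-IP path, using the address from this run:

    out/minikube-linux-arm64 start -p static-ip-655541 --static-ip=192.168.200.200
    # should print 192.168.200.200:
    out/minikube-linux-arm64 -p static-ip-655541 ip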
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (62.34s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-873922 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-873922 --driver=docker  --container-runtime=docker: (26.697531422s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-876605 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-876605 --driver=docker  --container-runtime=docker: (29.776813787s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-873922
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-876605
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-876605" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-876605
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-876605: (2.214141452s)
helpers_test.go:176: Cleaning up "first-873922" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-873922
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-873922: (2.233112824s)
--- PASS: TestMinikubeProfile (62.34s)
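The profile juggling above can be driven by hand: "profile <name>" switches the active profile and "profile list -ojson" emits machine-readable state for every cluster. A sketch of the sequence logged in this run:

    out/minikube-linux-arm64 profile first-873922
    out/minikube-linux-arm64 profile list -ojson
    # switch to the second profile and confirm, as the test does:
    out/minikube-linux-arm64 profile second-876605
    out/minikube-linux-arm64 profile list -ojson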
TestMountStart/serial/StartWithMountFirst (10.17s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-400396 --memory=3072 --mount-string /tmp/TestMountStartserial3271247209/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-400396 --memory=3072 --mount-string /tmp/TestMountStartserial3271247209/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.172407545s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.17s)
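The 9p mount exercised here is configured entirely through start flags; a trimmed sketch of the invocation logged above (the host path is a placeholder for the temp directory the test creates):

    out/minikube-linux-arm64 start -p mount-start-1-400396 --memory=3072 \
        --mount-string /path/on/host:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=docker
    # the mount is then visible from inside the node:
    out/minikube-linux-arm64 -p mount-start-1-400396 ssh -- ls /minikube-host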
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-400396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (10.09s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-402340 --memory=3072 --mount-string /tmp/TestMountStartserial3271247209/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-402340 --memory=3072 --mount-string /tmp/TestMountStartserial3271247209/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.085449116s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.09s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-402340 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.57s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-400396 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-400396 --alsologtostderr -v=5: (1.569078103s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-402340 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-402340
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-402340: (1.283959639s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (8.52s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-402340
E0110 08:47:51.602053    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-402340: (7.524228611s)
--- PASS: TestMountStart/serial/RestartStopped (8.52s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-402340 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (84.7s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-444226 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-444226 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.181346572s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.70s)
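The two-node cluster used throughout this suite comes from a single start invocation; a sketch of the command logged above, trimmed of its logging flags:

    out/minikube-linux-arm64 start -p multinode-444226 --wait=true --memory=3072 \
        --nodes=2 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 -p multinode-444226 status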
TestMultiNode/serial/DeployApp2Nodes (5.94s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- rollout status deployment/busybox
E0110 08:49:28.841476    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-444226 -- rollout status deployment/busybox: (4.005546005s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-cnz6k -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-czx95 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-cnz6k -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-czx95 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-cnz6k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-czx95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.94s)
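The DNS checks above all follow one pattern: deploy busybox replicas across the nodes, wait for rollout, then run nslookup from each pod. A condensed sketch (pod names are discovered at run time, as in the log):

    out/minikube-linux-arm64 kubectl -p multinode-444226 -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-444226 -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-arm64 kubectl -p multinode-444226 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
        out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec "$pod" -- nslookup kubernetes.default
    done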
TestMultiNode/serial/PingHostFrom2Pods (1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-cnz6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-cnz6k -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-czx95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec busybox-769dd8b7dd-czx95 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
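The host-reachability check first recovers the host.minikube.internal address by slicing nslookup output, then pings it. The pipeline from the log, with <busybox-pod> as a placeholder for a discovered pod name:

    out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec <busybox-pod> -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # then ping the address that prints (192.168.67.1 in this run):
    out/minikube-linux-arm64 kubectl -p multinode-444226 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"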
TestMultiNode/serial/AddNode (34.97s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-444226 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-444226 -v=5 --alsologtostderr: (34.23941341s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.97s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-444226 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.44s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp testdata/cp-test.txt multinode-444226:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418438587/001/cp-test_multinode-444226.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226:/home/docker/cp-test.txt multinode-444226-m02:/home/docker/cp-test_multinode-444226_multinode-444226-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test_multinode-444226_multinode-444226-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226:/home/docker/cp-test.txt multinode-444226-m03:/home/docker/cp-test_multinode-444226_multinode-444226-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test_multinode-444226_multinode-444226-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp testdata/cp-test.txt multinode-444226-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418438587/001/cp-test_multinode-444226-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m02:/home/docker/cp-test.txt multinode-444226:/home/docker/cp-test_multinode-444226-m02_multinode-444226.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test_multinode-444226-m02_multinode-444226.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m02:/home/docker/cp-test.txt multinode-444226-m03:/home/docker/cp-test_multinode-444226-m02_multinode-444226-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test_multinode-444226-m02_multinode-444226-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp testdata/cp-test.txt multinode-444226-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418438587/001/cp-test_multinode-444226-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m03:/home/docker/cp-test.txt multinode-444226:/home/docker/cp-test_multinode-444226-m03_multinode-444226.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226 "sudo cat /home/docker/cp-test_multinode-444226-m03_multinode-444226.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226-m03:/home/docker/cp-test.txt multinode-444226-m02:/home/docker/cp-test_multinode-444226-m03_multinode-444226-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test_multinode-444226-m03_multinode-444226-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.44s)
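Every permutation above uses the same cp grammar: a bare path is local to the host, and node:path addresses a file on a named node. A condensed sketch of one round trip:

    # host -> control plane, control plane -> worker, then verify over ssh:
    out/minikube-linux-arm64 -p multinode-444226 cp testdata/cp-test.txt multinode-444226:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-444226 cp multinode-444226:/home/docker/cp-test.txt multinode-444226-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-444226 ssh -n multinode-444226-m02 "sudo cat /home/docker/cp-test.txt"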
TestMultiNode/serial/StopNode (2.45s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-444226 node stop m03: (1.34270353s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-444226 status: exit status 7 (545.970972ms)
-- stdout --
	multinode-444226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-444226-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-444226-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr: exit status 7 (559.320914ms)
-- stdout --
	multinode-444226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-444226-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-444226-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 08:50:20.986517  172013 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:50:20.986666  172013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:50:20.986677  172013 out.go:374] Setting ErrFile to fd 2...
	I0110 08:50:20.986708  172013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:50:20.986985  172013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:50:20.987208  172013 out.go:368] Setting JSON to false
	I0110 08:50:20.987255  172013 mustload.go:66] Loading cluster: multinode-444226
	I0110 08:50:20.987345  172013 notify.go:221] Checking for updates...
	I0110 08:50:20.987712  172013 config.go:182] Loaded profile config "multinode-444226": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:50:20.987739  172013 status.go:174] checking status of multinode-444226 ...
	I0110 08:50:20.988323  172013 cli_runner.go:164] Run: docker container inspect multinode-444226 --format={{.State.Status}}
	I0110 08:50:21.013333  172013 status.go:371] multinode-444226 host status = "Running" (err=<nil>)
	I0110 08:50:21.013394  172013 host.go:66] Checking if "multinode-444226" exists ...
	I0110 08:50:21.013713  172013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-444226
	I0110 08:50:21.037487  172013 host.go:66] Checking if "multinode-444226" exists ...
	I0110 08:50:21.037847  172013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:50:21.037921  172013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-444226
	I0110 08:50:21.056747  172013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/multinode-444226/id_rsa Username:docker}
	I0110 08:50:21.158897  172013 ssh_runner.go:195] Run: systemctl --version
	I0110 08:50:21.165518  172013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:50:21.178480  172013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:50:21.245953  172013 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 08:50:21.234693289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:50:21.246594  172013 kubeconfig.go:125] found "multinode-444226" server: "https://192.168.67.2:8443"
	I0110 08:50:21.246644  172013 api_server.go:166] Checking apiserver status ...
	I0110 08:50:21.246703  172013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:50:21.261203  172013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2177/cgroup
	I0110 08:50:21.271783  172013 api_server.go:192] apiserver freezer: "8:freezer:/docker/43cdde0d9c13d1d54d19f8782fdecb45e35de9201f1f0b784ff77b604d121d7a/kubepods/burstable/pod225dfa175de30a0b22f146b0392e3c12/5db3b798f06d6cf7558360761f181694e929d84d668db21a699fe5be14d96a4e"
	I0110 08:50:21.271855  172013 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/43cdde0d9c13d1d54d19f8782fdecb45e35de9201f1f0b784ff77b604d121d7a/kubepods/burstable/pod225dfa175de30a0b22f146b0392e3c12/5db3b798f06d6cf7558360761f181694e929d84d668db21a699fe5be14d96a4e/freezer.state
	I0110 08:50:21.281718  172013 api_server.go:214] freezer state: "THAWED"
	I0110 08:50:21.281750  172013 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 08:50:21.292578  172013 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 08:50:21.292606  172013 status.go:463] multinode-444226 apiserver status = Running (err=<nil>)
	I0110 08:50:21.292617  172013 status.go:176] multinode-444226 status: &{Name:multinode-444226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:50:21.292635  172013 status.go:174] checking status of multinode-444226-m02 ...
	I0110 08:50:21.292965  172013 cli_runner.go:164] Run: docker container inspect multinode-444226-m02 --format={{.State.Status}}
	I0110 08:50:21.311469  172013 status.go:371] multinode-444226-m02 host status = "Running" (err=<nil>)
	I0110 08:50:21.311491  172013 host.go:66] Checking if "multinode-444226-m02" exists ...
	I0110 08:50:21.311820  172013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-444226-m02
	I0110 08:50:21.329424  172013 host.go:66] Checking if "multinode-444226-m02" exists ...
	I0110 08:50:21.329813  172013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:50:21.329860  172013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-444226-m02
	I0110 08:50:21.347851  172013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/multinode-444226-m02/id_rsa Username:docker}
	I0110 08:50:21.456673  172013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:50:21.469897  172013 status.go:176] multinode-444226-m02 status: &{Name:multinode-444226-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:50:21.469930  172013 status.go:174] checking status of multinode-444226-m03 ...
	I0110 08:50:21.470228  172013 cli_runner.go:164] Run: docker container inspect multinode-444226-m03 --format={{.State.Status}}
	I0110 08:50:21.487477  172013 status.go:371] multinode-444226-m03 host status = "Stopped" (err=<nil>)
	I0110 08:50:21.487498  172013 status.go:384] host is not running, skipping remaining checks
	I0110 08:50:21.487505  172013 status.go:176] multinode-444226-m03 status: &{Name:multinode-444226-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
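Note that status reports the partially stopped cluster through its exit code (7 in this run) rather than through stderr, so a scripted health check should branch on the code. A sketch; the meaning attached to each code here follows only what this log shows, not an exhaustive mapping:

    out/minikube-linux-arm64 -p multinode-444226 status
    case $? in
        0) echo "all nodes running" ;;
        7) echo "one or more components stopped" ;;
        *) echo "status check failed" ;;
    esac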
TestMultiNode/serial/StartAfterStop (9.49s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-444226 node start m03 -v=5 --alsologtostderr: (8.671099306s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.49s)

TestMultiNode/serial/RestartKeepsNodes (74.55s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-444226
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-444226
E0110 08:50:51.885650    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-444226: (23.08410395s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-444226 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-444226 --wait=true -v=5 --alsologtostderr: (51.349238196s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-444226
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.55s)

TestMultiNode/serial/DeleteNode (5.78s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-444226 node delete m03: (5.076912813s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.78s)

TestMultiNode/serial/StopMultiNode (22.14s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-444226 stop: (21.949034081s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-444226 status: exit status 7 (91.386332ms)
-- stdout --
	multinode-444226
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-444226-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr: exit status 7 (98.159091ms)
-- stdout --
	multinode-444226
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-444226-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 08:52:13.402946  185697 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:52:13.403064  185697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:52:13.403074  185697 out.go:374] Setting ErrFile to fd 2...
	I0110 08:52:13.403079  185697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:52:13.403308  185697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:52:13.403485  185697 out.go:368] Setting JSON to false
	I0110 08:52:13.403519  185697 mustload.go:66] Loading cluster: multinode-444226
	I0110 08:52:13.403566  185697 notify.go:221] Checking for updates...
	I0110 08:52:13.403897  185697 config.go:182] Loaded profile config "multinode-444226": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:52:13.404005  185697 status.go:174] checking status of multinode-444226 ...
	I0110 08:52:13.404513  185697 cli_runner.go:164] Run: docker container inspect multinode-444226 --format={{.State.Status}}
	I0110 08:52:13.424311  185697 status.go:371] multinode-444226 host status = "Stopped" (err=<nil>)
	I0110 08:52:13.424335  185697 status.go:384] host is not running, skipping remaining checks
	I0110 08:52:13.424343  185697 status.go:176] multinode-444226 status: &{Name:multinode-444226 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:52:13.424374  185697 status.go:174] checking status of multinode-444226-m02 ...
	I0110 08:52:13.424722  185697 cli_runner.go:164] Run: docker container inspect multinode-444226-m02 --format={{.State.Status}}
	I0110 08:52:13.453667  185697 status.go:371] multinode-444226-m02 host status = "Stopped" (err=<nil>)
	I0110 08:52:13.453691  185697 status.go:384] host is not running, skipping remaining checks
	I0110 08:52:13.453698  185697 status.go:176] multinode-444226-m02 status: &{Name:multinode-444226-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.14s)

TestMultiNode/serial/RestartMultiNode (50.72s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-444226 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0110 08:52:51.602639    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-444226 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.033778497s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-444226 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.72s)

TestMultiNode/serial/ValidateNameConflict (32.44s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-444226
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-444226-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-444226-m02 --driver=docker  --container-runtime=docker: exit status 14 (111.618986ms)
-- stdout --
	* [multinode-444226-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-444226-m02' is duplicated with machine name 'multinode-444226-m02' in profile 'multinode-444226'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-444226-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-444226-m03 --driver=docker  --container-runtime=docker: (29.623724688s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-444226
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-444226: exit status 80 (316.476802ms)
-- stdout --
	* Adding node m03 to cluster multinode-444226 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-444226-m03 already exists in multinode-444226-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-444226-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-444226-m03: (2.333263252s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.44s)

TestScheduledStopUnix (98.89s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-817125 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-817125 --memory=3072 --driver=docker  --container-runtime=docker: (25.628805486s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817125 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 08:54:06.538015  199514 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:06.538180  199514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:06.538194  199514 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:06.538200  199514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:06.538471  199514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:54:06.538743  199514 out.go:368] Setting JSON to false
	I0110 08:54:06.538864  199514 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:06.539239  199514 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:54:06.539328  199514 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/scheduled-stop-817125/config.json ...
	I0110 08:54:06.539513  199514 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:06.539635  199514 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-817125 -n scheduled-stop-817125
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817125 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 08:54:06.997632  199604 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:06.997934  199604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:06.997944  199604 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:06.997950  199604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:06.998409  199604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:54:06.998693  199604 out.go:368] Setting JSON to false
	I0110 08:54:06.998881  199604 daemonize_unix.go:73] killing process 199529 as it is an old scheduled stop
	I0110 08:54:07.001497  199604 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:07.001961  199604 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:54:07.002084  199604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/scheduled-stop-817125/config.json ...
	I0110 08:54:07.002307  199604 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:07.002471  199604 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 08:54:07.008221    4094 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/scheduled-stop-817125/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817125 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E0110 08:54:28.845900    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817125 -n scheduled-stop-817125
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817125
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817125 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 08:54:32.898882  200336 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:32.899012  200336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:32.899022  200336 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:32.899028  200336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:32.899315  200336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
	I0110 08:54:32.899590  200336 out.go:368] Setting JSON to false
	I0110 08:54:32.899714  200336 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:32.900052  200336 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 08:54:32.900138  200336 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/scheduled-stop-817125/config.json ...
	I0110 08:54:32.900384  200336 mustload.go:66] Loading cluster: scheduled-stop-817125
	I0110 08:54:32.900519  200336 config.go:182] Loaded profile config "scheduled-stop-817125": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817125
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-817125: exit status 7 (75.468332ms)
-- stdout --
	scheduled-stop-817125
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817125 -n scheduled-stop-817125
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817125 -n scheduled-stop-817125: exit status 7 (72.905145ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-817125" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-817125
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-817125: (1.64653375s)
--- PASS: TestScheduledStopUnix (98.89s)
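The scheduled-stop lifecycle exercised above is driven by three flags; a sketch of the sequence from this run:

    out/minikube-linux-arm64 stop -p scheduled-stop-817125 --schedule 5m
    # TimeToStop is populated while a stop is pending:
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-817125
    out/minikube-linux-arm64 stop -p scheduled-stop-817125 --cancel-scheduled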
TestSkaffold (134.98s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2482928795 version
skaffold_test.go:63: skaffold version: v2.17.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-777978 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-777978 --memory=3072 --driver=docker  --container-runtime=docker: (28.764009725s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2482928795 run --minikube-profile skaffold-777978 --kube-context skaffold-777978 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2482928795 run --minikube-profile skaffold-777978 --kube-context skaffold-777978 --status-check=true --port-forward=false --interactive=false: (1m30.597482499s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-7c4d54fd9d-n9pfp" [53f22731-bcb9-43ed-85c7-d8c336ce6ad7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00312476s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-7f6ccf6687-wtp8c" [5a9743e9-5d3c-491e-b342-95c91deb2a0b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003765325s
helpers_test.go:176: Cleaning up "skaffold-777978" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-777978
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-777978: (3.125473102s)
--- PASS: TestSkaffold (134.98s)
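The skaffold integration amounts to pointing skaffold at the minikube profile and kube context; a sketch of the run command logged above (a skaffold binary on PATH is assumed here, whereas the test downloads its own copy to a temp path):

    out/minikube-linux-arm64 start -p skaffold-777978 --memory=3072 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold-777978 --kube-context skaffold-777978 \
        --status-check=true --port-forward=false --interactive=false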
TestInsufficientStorage (10.83s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-595640 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-595640 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.417838814s)
-- stdout --
	{"specversion":"1.0","id":"2d841c4b-267f-4956-ac01-37ab9d8a8292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-595640] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a49bede-466c-4eb9-a942-fd285f921f8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"781c3fc2-2f99-40df-b2ca-655ba14ab9e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c6210b7a-18f2-45bf-86d3-459754524a85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig"}}
	{"specversion":"1.0","id":"c73a50a7-1105-47c4-9d66-4e9bf1c0f903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube"}}
	{"specversion":"1.0","id":"af7584a7-b425-494b-8bc8-9a534b2af884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"196b7407-3b48-440e-a03e-32a2814a9f48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35e7bc32-640d-4fa4-baeb-536cea554721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"47c2096d-ca29-4462-8a3d-66c081e77501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"31af3a4d-5182-43f9-91d2-871cc0fdf249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfb1cef2-6cfc-48cf-85ac-8518ec1ed56b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"14e850f8-e532-4e58-ace0-d2b438c754f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-595640\" primary control-plane node in \"insufficient-storage-595640\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1603fdcc-fde7-4dda-9c0d-4516e9fd3e74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"53ea4fdd-cab0-4b19-b314-fc7205e37081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"87b49b21-6851-4e7d-8a60-43c5c762b892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-595640 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-595640 --output=json --layout=cluster: exit status 7 (290.871473ms)

-- stdout --
	{"Name":"insufficient-storage-595640","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-595640","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0110 08:57:43.386707  210884 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-595640" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-595640 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-595640 --output=json --layout=cluster: exit status 7 (301.968741ms)

-- stdout --
	{"Name":"insufficient-storage-595640","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-595640","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0110 08:57:43.687345  210951 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-595640" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig
	E0110 08:57:43.697028  210951 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/insufficient-storage-595640/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-595640" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-595640
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-595640: (1.814693427s)
--- PASS: TestInsufficientStorage (10.83s)
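
Aside for readers scripting against this report: each line of the JSON start log above is a self-contained CloudEvents-style record. A minimal Go sketch for filtering such a stream (struct shaped after the fields visible above: specversion, type, data; this is a hypothetical consumer, not minikube's own exported types):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // cloudEvent mirrors the JSON lines above; "data" carries the step or
    // error payload (message, name, currentstep, advice, exitcode, ...).
    type cloudEvent struct {
    	SpecVersion string            `json:"specversion"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // pipe: minikube start --output=json | <this program>
    	for sc.Scan() {
    		var ev cloudEvent
    		if json.Unmarshal(sc.Bytes(), &ev) != nil {
    			continue // skip any non-JSON lines
    		}
    		// io.k8s.sigs.minikube.error events carry "exitcode" and "advice",
    		// e.g. RSRC_DOCKER_STORAGE with exit code 26 in the run above.
    		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
    	}
    }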

TestRunningBinaryUpgrade (104.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2825863442 start -p running-upgrade-943572 --memory=3072 --vm-driver=docker  --container-runtime=docker
E0110 09:09:28.843085    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2825863442 start -p running-upgrade-943572 --memory=3072 --vm-driver=docker  --container-runtime=docker: (56.481189654s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-943572 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-943572 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.572588848s)
helpers_test.go:176: Cleaning up "running-upgrade-943572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-943572
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-943572: (2.374039808s)
--- PASS: TestRunningBinaryUpgrade (104.32s)

TestKubernetesUpgrade (186.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.572647352s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-670199 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-670199 --alsologtostderr: (1.635290106s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-670199 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-670199 status --format={{.Host}}: exit status 7 (111.119758ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m48.604376843s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-670199 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (97.275047ms)

-- stdout --
	* [kubernetes-upgrade-670199] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-670199
	    minikube start -p kubernetes-upgrade-670199 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6701992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-670199 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 09:14:28.842724    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.741255973s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-670199" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-670199
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-670199: (2.439226335s)
--- PASS: TestKubernetesUpgrade (186.31s)
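
The `status --format={{.Host}}` call above is a Go text/template applied to the status object; the fields it can select match those shown by `status -o json` later in this report (Host, Kubelet, APIServer, Kubeconfig). An illustrative sketch with a stand-in struct (not minikube's internal type):

    package main

    import (
    	"os"
    	"text/template"
    )

    // status is a stand-in shaped after the `minikube status -o json` output
    // seen in this report; field names are taken from that JSON.
    type status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    }

    func main() {
    	st := status{Name: "kubernetes-upgrade-670199", Host: "Stopped",
    		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
    	// The same selection the test performs with --format={{.Host}}:
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
    }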

TestMissingContainerUpgrade (85.69s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3721236766 start -p missing-upgrade-589001 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3721236766 start -p missing-upgrade-589001 --memory=3072 --driver=docker  --container-runtime=docker: (33.838435796s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-589001
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-589001: (1.718380545s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-589001
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-589001 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-589001 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.475832695s)
helpers_test.go:176: Cleaning up "missing-upgrade-589001" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-589001
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-589001: (2.619364594s)
--- PASS: TestMissingContainerUpgrade (85.69s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (115.297142ms)

-- stdout --
	* [NoKubernetes-189523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
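
The MK_USAGE failure above is a mutual-exclusion check between --no-kubernetes and --kubernetes-version. A toy Go illustration of that kind of check (not minikube's actual code; only the flag names and the exit status 14 are taken from the run above):

    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    	flag.Parse()
    	if *noK8s && *k8sVersion != "" {
    		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14) // the run above exits 14 for this usage error
    	}
    	fmt.Println("flags ok")
    }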

TestNoKubernetes/serial/StartWithK8s (36.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189523 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0110 08:57:51.602507    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189523 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.869539131s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-189523 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.31s)

TestNoKubernetes/serial/StartWithStopK8s (18.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.291995396s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-189523 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-189523 status -o json: exit status 2 (317.533922ms)

-- stdout --
	{"Name":"NoKubernetes-189523","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-189523
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-189523: (1.816747272s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.43s)

TestNoKubernetes/serial/Start (9.47s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189523 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (9.467259844s)
--- PASS: TestNoKubernetes/serial/Start (9.47s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
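
A sketch of the kind of check this test performs (assumed logic, not the test's actual code): assert that nothing was downloaded into the v0.0.0 cache directory named in the log line above for a --no-kubernetes profile.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory layout taken from the log line above; the leading path
    	// is whatever MINIKUBE_HOME points at on your machine.
    	cache := os.ExpandEnv("$HOME/.minikube/cache/linux/arm64/v0.0.0")
    	entries, err := os.ReadDir(cache)
    	if os.IsNotExist(err) {
    		fmt.Println("ok: no Kubernetes artifacts were downloaded")
    		return
    	} else if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, e := range entries {
    		fmt.Println("cached:", filepath.Join(cache, e.Name()))
    	}
    }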

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-189523 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-189523 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.178325ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-189523
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-189523: (1.336982615s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (8.19s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189523 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189523 --driver=docker  --container-runtime=docker: (8.191758563s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.19s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-189523 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-189523 "sudo systemctl is-active --quiet service kubelet": exit status 1 (456.141277ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

TestStoppedBinaryUpgrade/Setup (0.77s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

TestStoppedBinaryUpgrade/Upgrade (331.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2500187277 start -p stopped-upgrade-018829 --memory=3072 --vm-driver=docker  --container-runtime=docker
E0110 09:12:20.535310    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2500187277 start -p stopped-upgrade-018829 --memory=3072 --vm-driver=docker  --container-runtime=docker: (41.937566141s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2500187277 -p stopped-upgrade-018829 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2500187277 -p stopped-upgrade-018829 stop: (12.051412898s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-018829 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 09:12:51.602058    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-018829 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m37.997333186s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (331.99s)

TestPreload/Start-NoPreload-PullImage (80.25s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-132485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-132485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m13.252401963s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-132485 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-132485
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-132485: (6.073218826s)
--- PASS: TestPreload/Start-NoPreload-PullImage (80.25s)

TestPreload/Restart-With-Preload-Check-User-Image (50.10s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-132485 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-132485 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.84847996s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-132485 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (50.10s)

TestPause/serial/Start (74.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-751940 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0110 09:17:20.534707    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-751940 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m14.560695375s)
--- PASS: TestPause/serial/Start (74.56s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-018829
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-018829: (1.208218704s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestNetworkPlugins/group/auto/Start (71.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0110 09:17:34.652857    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:17:51.602638    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m11.138845487s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.14s)

TestPause/serial/SecondStartNoReconfiguration (29.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-751940 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-751940 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.127089984s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.16s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-632912 "pgrep -a kubelet"
I0110 09:18:36.775382    4094 config.go:182] Loaded profile config "auto-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fzll2" [0a690bc3-ca8d-4f76-b439-8079a2aa0560] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fzll2" [0a690bc3-ca8d-4f76-b439-8079a2aa0560] Running
E0110 09:18:43.581201    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006364043s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestPause/serial/Pause (0.96s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-751940 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

TestNetworkPlugins/group/auto/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

TestPause/serial/VerifyStatus (0.46s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-751940 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-751940 --output=json --layout=cluster: exit status 2 (460.108921ms)

-- stdout --
	{"Name":"pause-751940","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-751940","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
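
The --layout=cluster status JSON above uses HTTP-like status codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage, all visible in this report). A minimal Go sketch for consuming that output (structs shaped after the JSON above, not minikube's exported API):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type component struct {
    	Name       string `json:"Name"`
    	StatusCode int    `json:"StatusCode"`
    	StatusName string `json:"StatusName"`
    }

    type node struct {
    	Name       string               `json:"Name"`
    	StatusCode int                  `json:"StatusCode"`
    	StatusName string               `json:"StatusName"`
    	Components map[string]component `json:"Components"`
    }

    type clusterStatus struct {
    	Name       string               `json:"Name"`
    	StatusCode int                  `json:"StatusCode"` // e.g. 418 Paused, 507 InsufficientStorage
    	StatusName string               `json:"StatusName"`
    	Components map[string]component `json:"Components"`
    	Nodes      []node               `json:"Nodes"`
    }

    func main() {
    	// pipe: minikube status -p <profile> --output=json --layout=cluster | <this program>
    	var cs clusterStatus
    	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName)
    	for _, n := range cs.Nodes {
    		for _, c := range n.Components {
    			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
    		}
    	}
    }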

TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-751940 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-751940 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.30s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-751940 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-751940 --alsologtostderr -v=5: (2.297341125s)
--- PASS: TestPause/serial/DeletePaused (2.30s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-751940
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-751940: exit status 1 (15.867279ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-751940: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
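
An assumed reproduction of the cleanup check above: after `minikube delete -p pause-751940`, `docker volume inspect` on the profile name should fail with a non-zero exit, exactly as it does in the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Expected to fail once the profile's volume has been deleted.
    	err := exec.Command("docker", "volume", "inspect", "pause-751940").Run()
    	if err != nil {
    		fmt.Println("ok: volume is gone:", err) // expected: exit status 1
    		return
    	}
    	fmt.Println("unexpected: volume still exists")
    }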

TestNetworkPlugins/group/kindnet/Start (57.70s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.700201947s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.70s)

TestNetworkPlugins/group/calico/Start (60.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0110 09:19:28.841098    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m0.049345041s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.05s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-t8zvf" [a4234b86-9868-4efa-8e5c-838a522ba617] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007396781s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-632912 "pgrep -a kubelet"
I0110 09:19:57.811354    4094 config.go:182] Loaded profile config "kindnet-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.64s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-632912 replace --force -f testdata/netcat-deployment.yaml
I0110 09:19:58.408720    4094 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-frqrm" [6ff21a36-09c5-4f57-95de-03557ee79cea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-frqrm" [6ff21a36-09c5-4f57-95de-03557ee79cea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003741157s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.64s)

TestNetworkPlugins/group/kindnet/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-mxv6h" [fd1960c5-c92f-4815-a7c4-57afd80fffcf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004885653s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-632912 "pgrep -a kubelet"
I0110 09:20:20.057643    4094 config.go:182] Loaded profile config "calico-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (12.40s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-6mplv" [1a361f9e-2d92-4f21-a1c2-e204966ce432] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-6mplv" [1a361f9e-2d92-4f21-a1c2-e204966ce432] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003785389s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.40s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

TestNetworkPlugins/group/calico/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.32s)

TestNetworkPlugins/group/custom-flannel/Start (56.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (56.777873412s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.78s)

TestNetworkPlugins/group/false/Start (74.65s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m14.650989744s)
--- PASS: TestNetworkPlugins/group/false/Start (74.65s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-632912 "pgrep -a kubelet"
I0110 09:21:31.905876    4094 config.go:182] Loaded profile config "custom-flannel-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zz64b" [d2228d27-4193-430a-8f42-ab4a44da7716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zz64b" [d2228d27-4193-430a-8f42-ab4a44da7716] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003661981s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (42.70s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (42.70129625s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.70s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-632912 "pgrep -a kubelet"
I0110 09:22:14.630151    4094 config.go:182] Loaded profile config "false-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

TestNetworkPlugins/group/false/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mdjd6" [886968e1-7a7a-4d2d-97da-c8071e84b57a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mdjd6" [886968e1-7a7a-4d2d-97da-c8071e84b57a] Running
E0110 09:22:20.535176    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.0052659s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-632912 "pgrep -a kubelet"
I0110 09:22:48.974291    4094 config.go:182] Loaded profile config "enable-default-cni-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-j9blw" [f28cdb6e-a3a0-4155-a015-aa917c07e991] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-j9blw" [f28cdb6e-a3a0-4155-a015-aa917c07e991] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.003887079s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0110 09:22:51.602870    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (53.630600654s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0110 09:23:37.025825    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.031131    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.041404    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.061663    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.105486    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.186340    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.347442    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:37.668474    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:38.309179    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:39.590102    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:23:42.150455    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m11.963362435s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.96s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-pfvkg" [79a567ca-c9aa-4e06-b6b2-2f00679048e1] Running
E0110 09:23:47.271045    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003677557s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
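Note: ControllerPod waits for the flannel DaemonSet to come up before the connectivity checks run. A sketch of the equivalent manual check, using the label and namespace shown in the log:

    # List the flannel controller pods the test waits on
    kubectl --context flannel-632912 -n kube-flannel get pods -l app=flannel
    # Or block until they are Ready, mirroring the 10m0s wait above
    kubectl --context flannel-632912 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m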

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-632912 "pgrep -a kubelet"
I0110 09:23:50.313870    4094 config.go:182] Loaded profile config "flannel-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-t5xjg" [0c1ad2a3-fb6f-4de9-baad-9e1f80eb3b86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-t5xjg" [0c1ad2a3-fb6f-4de9-baad-9e1f80eb3b86] Running
E0110 09:23:57.511709    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005770332s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (47.45s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0110 09:24:28.840792    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-632912 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (47.452383564s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (47.45s)
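Note: the Start subtests in this group differ only in how pod networking is selected; everything else in the invocation is shared. A condensed view (flags copied from the log; <profile> is a placeholder):

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --driver=docker --container-runtime=docker \
      --cni=flannel   # or: --cni=bridge, --enable-default-cni=true, --network-plugin=kubenet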

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-632912 "pgrep -a kubelet"
I0110 09:24:40.561856    4094 config.go:182] Loaded profile config "bridge-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-632912 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-xhq5t" [9c3ef32b-6e80-4f0f-bd02-599bf2ffd807] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-xhq5t" [9c3ef32b-6e80-4f0f-bd02-599bf2ffd807] Running
E0110 09:24:51.364154    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.369599    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.379872    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.400244    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.440545    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.520806    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.681199    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003986821s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-632912 exec deployment/netcat -- nslookup kubernetes.default
E0110 09:24:52.001640    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0110 09:24:52.641845    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-632912 "pgrep -a kubelet"
E0110 09:25:13.619116    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:13.624424    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:13.635226    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:13.655474    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:13.695752    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:13.776610    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0110 09:25:13.861564    4094 config.go:182] Loaded profile config "kubenet-632912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-632912 replace --force -f testdata/netcat-deployment.yaml
E0110 09:25:13.937616    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:14.259000    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7zj9h" [5351871c-cae8-469d-a8d5-7ade261433e4] Pending
E0110 09:25:14.899204    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-7zj9h" [5351871c-cae8-469d-a8d5-7ade261433e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005909497s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.44s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.31s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-187081 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0110 09:25:16.180004    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:18.740502    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-187081 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.122426884s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-187081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-187081
--- PASS: TestPreload/PreloadSrc/gcs (4.31s)

                                                
                                    
TestPreload/PreloadSrc/github (4.31s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-017215 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-017215 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.12701688s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-017215" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-017215
--- PASS: TestPreload/PreloadSrc/github (4.31s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.5s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-438817 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0110 09:25:23.861235    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-438817" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-438817
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.50s)
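Note: the three PreloadSrc timings are consistent with caching: the gcs and github downloads each take about 4.3s, while gcs-cached finishes in 0.5s because the v1.34.0-rc.2 tarball fetched by the github run is already on disk. A sketch for inspecting that cache (the directory layout is an assumption based on minikube's default cache location, not something this log confirms):

    # Preload tarballs land under the profile-independent cache dir (assumed layout)
    ls ~/.minikube/cache/preloaded-tarball/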

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (94.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-575078 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-575078 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m34.837562748s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (94.84s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-632912 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-632912 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.22s)
E0110 09:31:27.722920    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:32.229216    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-619852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:25:54.582824    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:13.287393    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:20.873176    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.229496    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.234758    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.244998    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.265288    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.305538    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.385763    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.546204    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:32.866751    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:33.507451    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:34.787951    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:35.543913    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:37.348218    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:42.469139    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:52.709576    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-619852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m19.903235323s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-575078 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6c7848ff-6596-4371-8514-422f10265ab5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6c7848ff-6596-4371-8514-422f10265ab5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00372043s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-575078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-619852 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [309b4658-cd8d-42a3-89c1-d208e39f2bfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [309b4658-cd8d-42a3-89c1-d208e39f2bfc] Running
E0110 09:27:13.189784    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:14.918526    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:14.923779    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:14.934044    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:14.954357    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:14.995105    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:15.075409    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:15.235838    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:15.556381    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:16.197420    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:17.478576    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003894553s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-619852 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-575078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-575078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090539662s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-575078 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-575078 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-575078 --alsologtostderr -v=3: (11.383860212s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-619852 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0110 09:27:20.038754    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-619852 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.43s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-619852 --alsologtostderr -v=3
E0110 09:27:20.534523    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-619852 --alsologtostderr -v=3: (11.429412715s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-575078 -n old-k8s-version-575078
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-575078 -n old-k8s-version-575078: exit status 7 (67.029383ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-575078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
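Note: the exit status 7 here is the expected answer, not a failure: minikube status encodes component state in the exit code's low bits (host, control plane, kubernetes), so 7 = 1+2+4 reports everything stopped, which the test accepts as "may be ok". A quick check sketch (the bit-flag reading is my understanding of minikube's status exit codes):

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-575078 -n old-k8s-version-575078
    echo $?   # 7 while the profile is stopped, 0 once it is running again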

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (30.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-575078 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0110 09:27:25.159502    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-575078 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (30.505080596s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-575078 -n old-k8s-version-575078
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (30.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-619852 -n no-preload-619852
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-619852 -n no-preload-619852: exit status 7 (169.099704ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-619852 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.64s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-619852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:27:35.207894    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:35.400574    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.252065    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.257328    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.267663    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.287950    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.328214    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.409256    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.569495    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:49.889873    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:50.530531    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:51.602480    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:51.811459    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-619852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (58.189083474s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-619852 -n no-preload-619852
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0110 09:27:54.150785    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-kstbn" [f77e2787-192d-4d81-acaf-b3b3bce5f3e6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0110 09:27:54.371844    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:55.880783    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:57.464134    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:59.493032    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-kstbn" [f77e2787-192d-4d81-acaf-b3b3bce5f3e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004019109s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-kstbn" [f77e2787-192d-4d81-acaf-b3b3bce5f3e6] Running
E0110 09:28:09.733727    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004001713s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-575078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-575078 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-575078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-575078 --alsologtostderr -v=1: (1.222139816s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-575078 -n old-k8s-version-575078
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-575078 -n old-k8s-version-575078: exit status 2 (339.272053ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-575078 -n old-k8s-version-575078
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-575078 -n old-k8s-version-575078: exit status 2 (367.843774ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-575078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-575078 -n old-k8s-version-575078
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-575078 -n old-k8s-version-575078
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.63s)
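For reference, the pause/unpause flow this test drives can be replayed by hand with the same commands shown above; while the profile is paused, minikube status exits with code 2 and reports the API server as Paused and the kubelet as Stopped, which the test accepts ("may be ok"):

$ out/minikube-linux-arm64 pause -p old-k8s-version-575078 --alsologtostderr -v=1
$ out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-575078 -n old-k8s-version-575078   # prints "Paused", exit status 2
$ out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-575078 -n old-k8s-version-575078    # prints "Stopped", exit status 2
$ out/minikube-linux-arm64 unpause -p old-k8s-version-575078 --alsologtostderr -v=1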

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (68.85s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-749989 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:28:30.214648    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-749989 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m8.851729298s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-d5f6j" [d07e39f0-a776-4411-9c68-143a9b95dd87] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003816054s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-d5f6j" [d07e39f0-a776-4411-9c68-143a9b95dd87] Running
E0110 09:28:36.841188    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:37.025094    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00420118s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-619852 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-619852 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.71s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-619852 --alsologtostderr -v=1
E0110 09:28:43.877004    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:43.882628    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:43.892896    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:43.913193    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:43.953514    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:28:44.033991    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-619852 -n no-preload-619852
E0110 09:28:44.194099    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-619852 -n no-preload-619852: exit status 2 (383.894792ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-619852 -n no-preload-619852
E0110 09:28:44.516146    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-619852 -n no-preload-619852: exit status 2 (423.502468ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-619852 --alsologtostderr -v=1
E0110 09:28:45.157219    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-619852 -n no-preload-619852
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-619852 -n no-preload-619852
E0110 09:28:46.438074    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-605650 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:28:54.119526    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:04.360687    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:04.713318    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/auto-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:11.174876    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:16.071039    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:24.841511    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-605650 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m8.096253693s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.10s)
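For reference, what distinguishes this group is the non-default API server port: the profile is started with --apiserver-port=8444 rather than minikube's default of 8443 (command as run above):

$ out/minikube-linux-arm64 start -p default-k8s-diff-port-605650 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0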

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-749989 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0564dcda-b516-48af-b4f1-7c0246f0ceb7] Pending
helpers_test.go:353: "busybox" [0564dcda-b516-48af-b4f1-7c0246f0ceb7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0110 09:29:28.841412    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [0564dcda-b516-48af-b4f1-7c0246f0ceb7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004107421s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-749989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)
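For reference, the deploy check above reduces to two kubectl calls against the profile's context (commands exactly as run by the test; busybox.yaml is the manifest bundled in the test's testdata directory):

$ kubectl --context embed-certs-749989 create -f testdata/busybox.yaml
$ kubectl --context embed-certs-749989 exec busybox -- /bin/sh -c "ulimit -n"

Between the two calls the test waits up to 8m0s for the pod matching integration-test=busybox to report Running.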

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-749989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-749989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)
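For reference, the addon is enabled on the live profile with its image and registry overridden (the fake.domain registry suggests the test only verifies that the override lands in the deployment spec, not that the image is pullable), then inspected via kubectl (commands as run above):

$ out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-749989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
$ kubectl --context embed-certs-749989 describe deploy/metrics-server -n kube-system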

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.27s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-749989 --alsologtostderr -v=3
E0110 09:29:40.931640    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:40.936986    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:40.947314    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:40.968238    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:41.008638    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:41.089009    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:41.249470    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:41.570070    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:42.210379    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:43.490689    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:46.050956    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-749989 --alsologtostderr -v=3: (11.269451822s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-749989 -n embed-certs-749989
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-749989 -n embed-certs-749989: exit status 7 (76.870476ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-749989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.14s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-749989 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:29:51.171185    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:29:51.363713    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-749989 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (55.689519527s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-749989 -n embed-certs-749989
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.14s)
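For reference, the Stop / EnableAddonAfterStop / SecondStart sequence for this profile uses the same commands the tests above run; a stopped profile reports its Host as Stopped with exit status 7, which the test tolerates:

$ out/minikube-linux-arm64 stop -p embed-certs-749989 --alsologtostderr -v=3
$ out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-749989 -n embed-certs-749989   # prints "Stopped", exit status 7
$ out/minikube-linux-arm64 addons enable dashboard -p embed-certs-749989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
$ out/minikube-linux-arm64 start -p embed-certs-749989 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0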

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-605650 create -f testdata/busybox.yaml
E0110 09:29:58.761900    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/false-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4218ada8-04ac-4097-9811-f5e1113d804f] Pending
helpers_test.go:353: "busybox" [4218ada8-04ac-4097-9811-f5e1113d804f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0110 09:30:01.411677    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [4218ada8-04ac-4097-9811-f5e1113d804f] Running
E0110 09:30:05.802026    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004308229s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-605650 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-605650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-605650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192933219s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-605650 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-605650 --alsologtostderr -v=3
E0110 09:30:13.619357    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.260148    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.265385    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.275887    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.296161    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.336726    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.417007    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.577385    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:14.897988    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:15.539125    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:16.820042    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:19.048700    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kindnet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:19.380476    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:21.892309    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-605650 --alsologtostderr -v=3: (11.618295985s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650: exit status 7 (74.217372ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-605650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (29.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-605650 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:30:24.501262    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:33.095756    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/enable-default-cni-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:34.742265    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:41.305173    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/calico-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-605650 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (29.488432964s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (29.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ttccp" [e0336fbb-9ec6-4b58-9e9e-c088d354361b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003221475s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ttccp" [e0336fbb-9ec6-4b58-9e9e-c088d354361b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003652038s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-749989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w7z4x" [f5cfe7e1-2376-43ea-a59c-80efdc28d827] Running
E0110 09:30:55.223279    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00390353s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-749989 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.09s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-749989 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-749989 -n embed-certs-749989
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-749989 -n embed-certs-749989: exit status 2 (353.702019ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-749989 -n embed-certs-749989
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-749989 -n embed-certs-749989: exit status 2 (344.430392ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-749989 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-749989 -n embed-certs-749989
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-749989 -n embed-certs-749989
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w7z4x" [f5cfe7e1-2376-43ea-a59c-80efdc28d827] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003538584s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-605650 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.3s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-332116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:31:02.853182    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/bridge-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-332116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (33.297712722s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.30s)
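For reference, the newest-cni profile is brought up with an explicit CNI network plugin and a custom pod CIDR passed through to kubeadm, with --wait narrowed to apiserver, system_pods, and default_sa since workload pods cannot schedule until a CNI is installed; that is also why later entries in this group log "cni mode requires additional setup" and pass DeployApp/UserAppExistsAfterStop/AddonExistsAfterStop as no-ops:

$ out/minikube-linux-arm64 start -p newest-cni-332116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0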

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-605650 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-605650 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650: exit status 2 (428.994884ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650: exit status 2 (367.077007ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-605650 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-605650 -n default-k8s-diff-port-605650
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-332116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0110 09:31:36.183545    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/kubenet-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-332116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166758784s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-332116 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-332116 --alsologtostderr -v=3: (11.250234767s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-332116 -n newest-cni-332116
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-332116 -n newest-cni-332116: exit status 7 (64.70275ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-332116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.5s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-332116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 09:31:59.244490    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.249723    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.260575    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.280817    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.321660    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.402304    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.562482    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.882724    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:31:59.912357    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/custom-flannel-632912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:32:00.523692    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:32:01.804569    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:32:04.365527    4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/old-k8s-version-575078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-332116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (16.095181884s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-332116 -n newest-cni-332116
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-332116 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.99s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-332116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-332116 -n newest-cni-332116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-332116 -n newest-cni-332116: exit status 2 (324.094079ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-332116 -n newest-cni-332116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-332116 -n newest-cni-332116: exit status 2 (346.047297ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-332116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-332116 -n newest-cni-332116
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-332116 -n newest-cni-332116
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)

                                                
                                    

Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-408977 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-408977" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-408977
--- SKIP: TestDownloadOnlyKic (0.42s)

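Note the ordering in the log above: the profile delete runs even though the test skips. In Go tests that falls out naturally when the cleanup is registered before the skip gate, since t.Cleanup functions also run for skipped tests. A minimal sketch under that assumption (profile name from the log; structure illustrative, not the actual harness code):

package example

import (
	"os/exec"
	"runtime"
	"testing"
)

func TestDownloadOnlyKic(t *testing.T) {
	profile := "download-docker-408977"
	// Registered before the skip: this still runs when t.Skip fires,
	// which is why the delete appears in the log after the skip message.
	t.Cleanup(func() {
		exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()
	})
	if runtime.GOARCH == "arm64" {
		t.Skip("Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144")
	}
	// ...the download-only start would be exercised here on amd64...
}
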
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

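The MySQL skip is a plain architecture gate, checked before any resources are created: the MySQL image used by this suite has no usable arm64 variant, so on arm64 hosts the test exits immediately. A sketch of the gate (illustrative):

package example

import (
	"runtime"
	"testing"
)

func TestMySQL(t *testing.T) {
	// Gate on the host architecture before deploying anything.
	if runtime.GOARCH == "arm64" {
		t.Skip("arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144")
	}
	// ...deploy MySQL and run queries only on supported architectures...
}
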
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

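This skip and the two DNS subtests that follow share a single platform gate: tunnel DNS forwarding requires the HyperKit driver, which exists only on macOS. A hedged sketch of such a gate; the driver argument is an illustrative stand-in for the value the suite reads from its configuration:

package example

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitOnDarwin gates DNS-forwarding tests on OS and driver.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
	t.Helper()
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}

func TestDNSResolutionByDig(t *testing.T) {
	skipUnlessHyperkitOnDarwin(t, "docker") // this run uses the docker driver
	// ...dig through the tunnel would be exercised here...
}
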
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-632912 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-632912

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-632912

>>> host: /etc/nsswitch.conf:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/hosts:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/resolv.conf:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-632912

>>> host: crictl pods:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: crictl containers:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> k8s: describe netcat deployment:
error: context "cilium-632912" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-632912" does not exist

>>> k8s: netcat logs:
error: context "cilium-632912" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-632912" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-632912" does not exist

>>> k8s: coredns logs:
error: context "cilium-632912" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-632912" does not exist

>>> k8s: api server logs:
error: context "cilium-632912" does not exist

>>> host: /etc/cni:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: ip a s:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: ip r s:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: iptables-save:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: iptables table nat:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-632912

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-632912

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-632912" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-632912" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-632912

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-632912

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-632912" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-632912" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-632912" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-632912" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-632912" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: kubelet daemon config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> k8s: kubelet logs:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-632912

>>> host: docker daemon status:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: docker daemon config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: docker system info:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: cri-docker daemon status:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: cri-docker daemon config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: cri-dockerd version:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: containerd daemon status:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: containerd daemon config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: containerd config dump:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: crio daemon status:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: crio daemon config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: /etc/crio:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

>>> host: crio config:
* Profile "cilium-632912" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-632912"

----------------------- debugLogs end: cilium-632912 [took: 5.350437795s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-632912" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-632912
--- SKIP: TestNetworkPlugins/group/cilium (5.76s)

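Although the cilium group is skipped, the harness still collects its debugLogs dump, and because the cilium-632912 profile was never created, every probe prints one of the same few "context not found" / "profile not found" errors. The collector is essentially a loop over canned probes; an abridged, illustrative Go sketch (the probe list here is a tiny sample, not the real command table):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "cilium-632912"
	// Tiny sample of probes; the real harness runs many more commands
	// against the kubeconfig context and the minikube profile.
	probes := []struct {
		title string
		cmd   []string
	}{
		{"netcat: nslookup kubernetes.default",
			[]string{"kubectl", "--context", ctx, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"k8s: kubectl config",
			[]string{"kubectl", "config", "view"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.title)
		// With no such context/profile, each command prints the errors
		// seen in the dump above; output is captured either way.
		out, _ := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
		fmt.Println(string(out))
	}
}
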
TestStartStop/group/disable-driver-mounts (0.33s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-888828" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-888828
--- SKIP: TestStartStop/group/disable-driver-mounts (0.33s)