Test Report: Docker_Linux 20317

bb508b30435b2a744d00b2f75d06f98d338973f1:2025-01-27:38093

Test failures (1/345)

Order  Failed test  Duration (s)
22     TestOffline  904.14
TestOffline (904.14s)
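Note on the failure mode: as the log below shows, minikube start did not exit with an error of its own; the process was killed by the test's 15-minute deadline (signal: killed (15m0.005840455s)) before the start command could finish. A minimal Go sketch of how a deadline-bound invocation produces exactly that error, assuming a plain context.WithTimeout wrapper (the helper the minikube integration suite actually uses may differ); the command line is copied verbatim from the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Per-command deadline; mirrors the 15m budget visible in the log below.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Invocation copied verbatim from the failing test run.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "offline-docker-649313", "--alsologtostderr", "-v=1",
		"--memory=2048", "--wait=true", "--driver=docker",
		"--container-runtime=docker")

	out, err := cmd.CombinedOutput()
	if err != nil {
		// When the context expires, CommandContext kills the subprocess with
		// SIGKILL, and the returned error reads "signal: killed" on Unix,
		// exactly as reported above.
		fmt.Printf("Non-zero exit: %v\n%s", err, out)
	}
}

Go's exec.CommandContext sends SIGKILL once the context is done, so "signal: killed" points at the harness timeout rather than a minikube crash. The HTTP_PROXY=172.16.1.1:1 setting visible in the log appears to be a deliberately unreachable proxy used to simulate an offline environment, which makes slow or hung network waits the likely reason the deadline was exceeded.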

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: signal: killed (15m0.005840455s)

-- stdout --
	* [offline-docker-649313] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "offline-docker-649313" primary control-plane node in "offline-docker-649313" cluster
	* Pulling base image v0.0.46 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
	  - env HTTP_PROXY=172.16.1.1:1
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I0127 12:43:50.252629  569503 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:43:50.252907  569503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:43:50.252917  569503 out.go:358] Setting ErrFile to fd 2...
	I0127 12:43:50.252924  569503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:43:50.253117  569503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:43:50.253774  569503 out.go:352] Setting JSON to false
	I0127 12:43:50.254766  569503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30377,"bootTime":1737951453,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:43:50.254839  569503 start.go:139] virtualization: kvm guest
	I0127 12:43:50.257110  569503 out.go:177] * [offline-docker-649313] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:43:50.259049  569503 notify.go:220] Checking for updates...
	I0127 12:43:50.259073  569503 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:43:50.260819  569503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:43:50.262145  569503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:43:50.263276  569503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:43:50.264534  569503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:43:50.266222  569503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:43:50.268099  569503 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:43:50.296647  569503 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:43:50.296793  569503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:43:50.360029  569503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-01-27 12:43:50.346883761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:43:50.360194  569503 docker.go:318] overlay module found
	I0127 12:43:50.361842  569503 out.go:177] * Using the docker driver based on user configuration
	I0127 12:43:50.363056  569503 start.go:297] selected driver: docker
	I0127 12:43:50.363071  569503 start.go:901] validating driver "docker" against <nil>
	I0127 12:43:50.363097  569503 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:43:50.364273  569503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:43:50.436252  569503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-01-27 12:43:50.422490841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:43:50.436494  569503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:43:50.436864  569503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:43:50.438566  569503 out.go:177] * Using Docker driver with root privileges
	I0127 12:43:50.440226  569503 cni.go:84] Creating CNI manager for ""
	I0127 12:43:50.440322  569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 12:43:50.440333  569503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:43:50.440444  569503 start.go:340] cluster config:
	{Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:43:50.442019  569503 out.go:177] * Starting "offline-docker-649313" primary control-plane node in "offline-docker-649313" cluster
	I0127 12:43:50.443335  569503 cache.go:121] Beginning downloading kic base image for docker with docker
	I0127 12:43:50.444678  569503 out.go:177] * Pulling base image v0.0.46 ...
	I0127 12:43:50.445860  569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:43:50.445918  569503 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 12:43:50.445951  569503 cache.go:56] Caching tarball of preloaded images
	I0127 12:43:50.445997  569503 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:43:50.446088  569503 preload.go:172] Found /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:43:50.446119  569503 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:43:50.446683  569503 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json ...
	I0127 12:43:50.446727  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json: {Name:mk9ddecbecdff2b7295ef3347202aeeaf53c675e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:43:50.480974  569503 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 12:43:50.480997  569503 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 12:43:50.481019  569503 cache.go:227] Successfully downloaded all kic artifacts
	I0127 12:43:50.481063  569503 start.go:360] acquireMachinesLock for offline-docker-649313: {Name:mkc0c4b7197804f1697dc3869952ab8c5283ac8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:43:50.481188  569503 start.go:364] duration metric: took 99.534µs to acquireMachinesLock for "offline-docker-649313"
	I0127 12:43:50.481246  569503 start.go:93] Provisioning new machine with config: &{Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:43:50.481359  569503 start.go:125] createHost starting for "" (driver="docker")
	I0127 12:43:50.483541  569503 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0127 12:43:50.483893  569503 start.go:159] libmachine.API.Create for "offline-docker-649313" (driver="docker")
	I0127 12:43:50.483935  569503 client.go:168] LocalClient.Create starting
	I0127 12:43:50.484006  569503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem
	I0127 12:43:50.484049  569503 main.go:141] libmachine: Decoding PEM data...
	I0127 12:43:50.484070  569503 main.go:141] libmachine: Parsing certificate...
	I0127 12:43:50.484166  569503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem
	I0127 12:43:50.484221  569503 main.go:141] libmachine: Decoding PEM data...
	I0127 12:43:50.484241  569503 main.go:141] libmachine: Parsing certificate...
	I0127 12:43:50.484720  569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 12:43:50.512526  569503 cli_runner.go:211] docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 12:43:50.512625  569503 network_create.go:284] running [docker network inspect offline-docker-649313] to gather additional debugging logs...
	I0127 12:43:50.512647  569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313
	W0127 12:43:50.537912  569503 cli_runner.go:211] docker network inspect offline-docker-649313 returned with exit code 1
	I0127 12:43:50.537941  569503 network_create.go:287] error running [docker network inspect offline-docker-649313]: docker network inspect offline-docker-649313: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-649313 not found
	I0127 12:43:50.537973  569503 network_create.go:289] output of [docker network inspect offline-docker-649313]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-649313 not found
	
	** /stderr **
	I0127 12:43:50.538265  569503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:43:50.559488  569503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a67733940b1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:47:92:de:9e} reservation:<nil>}
	I0127 12:43:50.560755  569503 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-526e8be49203 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:00:a4:5e:8f} reservation:<nil>}
	I0127 12:43:50.562386  569503 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1505344accd1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ee:63:1d:4f} reservation:<nil>}
	I0127 12:43:50.563769  569503 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001710c60}
	I0127 12:43:50.563805  569503 network_create.go:124] attempt to create docker network offline-docker-649313 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 12:43:50.564010  569503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-649313 offline-docker-649313
	I0127 12:43:50.644070  569503 network_create.go:108] docker network offline-docker-649313 192.168.76.0/24 created
	I0127 12:43:50.644105  569503 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-649313" container
	I0127 12:43:50.644187  569503 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 12:43:50.665074  569503 cli_runner.go:164] Run: docker volume create offline-docker-649313 --label name.minikube.sigs.k8s.io=offline-docker-649313 --label created_by.minikube.sigs.k8s.io=true
	I0127 12:43:50.688658  569503 oci.go:103] Successfully created a docker volume offline-docker-649313
	I0127 12:43:50.688738  569503 cli_runner.go:164] Run: docker run --rm --name offline-docker-649313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --entrypoint /usr/bin/test -v offline-docker-649313:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 12:43:52.136662  569503 cli_runner.go:217] Completed: docker run --rm --name offline-docker-649313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --entrypoint /usr/bin/test -v offline-docker-649313:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (1.4478855s)
	I0127 12:43:52.136699  569503 oci.go:107] Successfully prepared a docker volume offline-docker-649313
	I0127 12:43:52.136747  569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:43:52.136776  569503 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 12:43:52.136871  569503 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-649313:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 12:44:00.425946  569503 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-649313:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (8.289024823s)
	I0127 12:44:00.425979  569503 kic.go:203] duration metric: took 8.289197577s to extract preloaded images to volume ...
	W0127 12:44:00.426091  569503 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 12:44:00.426181  569503 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 12:44:00.477148  569503 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-649313 --name offline-docker-649313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-649313 --network offline-docker-649313 --ip 192.168.76.2 --volume offline-docker-649313:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 12:44:00.836408  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Running}}
	I0127 12:44:00.856034  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:00.877164  569503 cli_runner.go:164] Run: docker exec offline-docker-649313 stat /var/lib/dpkg/alternatives/iptables
	I0127 12:44:00.932757  569503 oci.go:144] the created container "offline-docker-649313" has a running status.
	I0127 12:44:00.932790  569503 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa...
	I0127 12:44:01.140333  569503 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 12:44:01.170653  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:01.193865  569503 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 12:44:01.193898  569503 kic_runner.go:114] Args: [docker exec --privileged offline-docker-649313 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 12:44:01.274058  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:01.321813  569503 machine.go:93] provisionDockerMachine start ...
	I0127 12:44:01.321918  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:01.339339  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:01.339566  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:01.339576  569503 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:44:01.547608  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-649313
	
	I0127 12:44:01.547637  569503 ubuntu.go:169] provisioning hostname "offline-docker-649313"
	I0127 12:44:01.547688  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:01.569390  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:01.569616  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:01.569626  569503 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-649313 && echo "offline-docker-649313" | sudo tee /etc/hostname
	I0127 12:44:01.711696  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-649313
	
	I0127 12:44:01.711781  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:01.731509  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:01.731734  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:01.731761  569503 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-649313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-649313/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-649313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:44:01.864345  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:44:01.864381  569503 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-304536/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-304536/.minikube}
	I0127 12:44:01.864406  569503 ubuntu.go:177] setting up certificates
	I0127 12:44:01.864418  569503 provision.go:84] configureAuth start
	I0127 12:44:01.864488  569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
	I0127 12:44:01.881735  569503 provision.go:143] copyHostCerts
	I0127 12:44:01.881802  569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem, removing ...
	I0127 12:44:01.881811  569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem
	I0127 12:44:01.881878  569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem (1082 bytes)
	I0127 12:44:01.881972  569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem, removing ...
	I0127 12:44:01.881980  569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem
	I0127 12:44:01.882002  569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem (1123 bytes)
	I0127 12:44:01.882070  569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem, removing ...
	I0127 12:44:01.882077  569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem
	I0127 12:44:01.882096  569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem (1679 bytes)
	I0127 12:44:01.882156  569503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem org=jenkins.offline-docker-649313 san=[127.0.0.1 192.168.76.2 localhost minikube offline-docker-649313]
	I0127 12:44:01.996130  569503 provision.go:177] copyRemoteCerts
	I0127 12:44:01.996210  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:44:01.996265  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:02.013334  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:02.104753  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 12:44:02.126022  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:44:02.147296  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:44:02.168824  569503 provision.go:87] duration metric: took 304.388928ms to configureAuth
	I0127 12:44:02.168866  569503 ubuntu.go:193] setting minikube options for container-runtime
	I0127 12:44:02.169078  569503 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:44:02.169150  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:02.185892  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:02.186087  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:02.186099  569503 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:44:02.312774  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 12:44:02.312800  569503 ubuntu.go:71] root file system type: overlay
	I0127 12:44:02.312942  569503 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:44:02.312999  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:02.330153  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:02.330380  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:02.330479  569503 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:44:02.468083  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:44:02.468200  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:02.488599  569503 main.go:141] libmachine: Using SSH client type: native
	I0127 12:44:02.488850  569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0127 12:44:02.488877  569503 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:44:03.211959  569503 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-12-17 15:44:19.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-01-27 12:44:02.461683918 +0000
	@@ -1,46 +1,50 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=HTTP_PROXY=172.16.1.1:1
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0127 12:44:03.212003  569503 machine.go:96] duration metric: took 1.890166528s to provisionDockerMachine
	I0127 12:44:03.212016  569503 client.go:171] duration metric: took 12.728069845s to LocalClient.Create
	I0127 12:44:03.212033  569503 start.go:167] duration metric: took 12.728144901s to libmachine.API.Create "offline-docker-649313"
	I0127 12:44:03.212044  569503 start.go:293] postStartSetup for "offline-docker-649313" (driver="docker")
	I0127 12:44:03.212058  569503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:44:03.212135  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:44:03.212238  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:03.230384  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:03.324960  569503 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:44:03.328108  569503 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 12:44:03.328146  569503 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 12:44:03.328165  569503 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 12:44:03.328202  569503 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 12:44:03.328222  569503 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/addons for local assets ...
	I0127 12:44:03.328283  569503 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/files for local assets ...
	I0127 12:44:03.328382  569503 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem -> 3113072.pem in /etc/ssl/certs
	I0127 12:44:03.328502  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:44:03.336281  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /etc/ssl/certs/3113072.pem (1708 bytes)
	I0127 12:44:03.358027  569503 start.go:296] duration metric: took 145.964414ms for postStartSetup
	I0127 12:44:03.358441  569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
	I0127 12:44:03.374811  569503 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json ...
	I0127 12:44:03.375068  569503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:44:03.375108  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:03.392625  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:03.482704  569503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 12:44:03.488697  569503 start.go:128] duration metric: took 13.007318103s to createHost
	I0127 12:44:03.488731  569503 start.go:83] releasing machines lock for "offline-docker-649313", held for 13.007527504s
	I0127 12:44:03.488827  569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
	I0127 12:44:03.525231  569503 out.go:177] * Found network options:
	I0127 12:44:03.526794  569503 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W0127 12:44:03.528273  569503 out.go:270] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.76.2).
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.76.2).
	I0127 12:44:03.529591  569503 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0127 12:44:03.531012  569503 ssh_runner.go:195] Run: cat /version.json
	I0127 12:44:03.531075  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:03.531097  569503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:44:03.531172  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:03.558001  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:03.561246  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:03.756365  569503 ssh_runner.go:195] Run: systemctl --version
	I0127 12:44:03.761676  569503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:44:03.767034  569503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 12:44:03.801721  569503 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 12:44:03.801810  569503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:44:03.835659  569503 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 12:44:03.835701  569503 start.go:495] detecting cgroup driver to use...
	I0127 12:44:03.835743  569503 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:44:03.835896  569503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:44:03.855206  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:44:03.867636  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:44:03.880034  569503 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:44:03.880122  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:44:03.891467  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:44:03.903447  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:44:03.915635  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:44:03.926757  569503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:44:03.936401  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:44:03.948478  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:44:03.960367  569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:44:03.973538  569503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:44:03.983261  569503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:44:03.993564  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:04.087443  569503 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:44:04.200540  569503 start.go:495] detecting cgroup driver to use...
	I0127 12:44:04.200624  569503 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:44:04.200691  569503 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:44:04.218794  569503 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0127 12:44:04.218852  569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:44:04.231149  569503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:44:04.249596  569503 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:44:04.253119  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:44:04.263643  569503 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:44:04.285345  569503 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:44:04.389020  569503 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:44:04.495152  569503 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:44:04.495335  569503 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:44:04.517143  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:04.613860  569503 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:44:07.422519  569503 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.808620933s)
	I0127 12:44:07.422583  569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:44:07.437810  569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:44:07.452521  569503 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:44:07.560478  569503 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:44:07.662603  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:07.765306  569503 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:44:07.782492  569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:44:07.794422  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:07.909756  569503 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:44:07.994285  569503 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:44:07.994382  569503 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:44:07.998439  569503 start.go:563] Will wait 60s for crictl version
	I0127 12:44:07.998500  569503 ssh_runner.go:195] Run: which crictl
	I0127 12:44:08.002291  569503 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:44:08.046037  569503 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.1
	RuntimeApiVersion:  v1
	I0127 12:44:08.046101  569503 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:44:08.077605  569503 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:44:08.118153  569503 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
	I0127 12:44:08.124279  569503 out.go:177]   - env HTTP_PROXY=172.16.1.1:1
	I0127 12:44:08.125881  569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:44:08.150565  569503 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 12:44:08.155440  569503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
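The /etc/hosts rewrite above uses a filter-append-copy pattern: strip any stale host.minikube.internal line, append the fresh mapping, then install the temp file as root in a single cp. The same command, unwrapped:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts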
	I0127 12:44:08.198992  569503 kubeadm.go:883] updating cluster {Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:44:08.199135  569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:44:08.199198  569503 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:44:08.237471  569503 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 12:44:08.237499  569503 docker.go:619] Images already preloaded, skipping extraction
	I0127 12:44:08.237568  569503 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:44:08.272557  569503 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 12:44:08.272584  569503 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:44:08.272596  569503 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.1 docker true true} ...
	I0127 12:44:08.272713  569503 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=offline-docker-649313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:44:08.272778  569503 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 12:44:08.340922  569503 cni.go:84] Creating CNI manager for ""
	I0127 12:44:08.340956  569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 12:44:08.340972  569503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:44:08.341001  569503 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-docker-649313 NodeName:offline-docker-649313 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:44:08.341246  569503 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "offline-docker-649313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:44:08.341786  569503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:44:08.360490  569503 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:44:08.360568  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:44:08.371926  569503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0127 12:44:08.393465  569503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:44:08.413898  569503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
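With the generated config now staged as kubeadm.yaml.new, it can be sanity-checked before init runs; a hedged sketch using kubeadm's own validator (the binary and file paths match the steps above, but this check is not part of the captured run):

	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new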
	I0127 12:44:08.434503  569503 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 12:44:08.439061  569503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:44:08.452614  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:08.557246  569503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:44:08.573110  569503 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313 for IP: 192.168.76.2
	I0127 12:44:08.573141  569503 certs.go:194] generating shared ca certs ...
	I0127 12:44:08.573169  569503 certs.go:226] acquiring lock for ca certs: {Name:mk1b16f74c226e2be2c446b7baf1d60d1399508e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.573329  569503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key
	I0127 12:44:08.573387  569503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key
	I0127 12:44:08.573403  569503 certs.go:256] generating profile certs ...
	I0127 12:44:08.573486  569503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key
	I0127 12:44:08.573514  569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt with IP's: []
	I0127 12:44:08.643889  569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt ...
	I0127 12:44:08.643920  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt: {Name:mk725ab3a72353fd47063c69e20c23063e887de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.644076  569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key ...
	I0127 12:44:08.644089  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key: {Name:mk2c66f11c5ec046d1666625044232dab99b9a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.644168  569503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907
	I0127 12:44:08.644208  569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 12:44:08.846066  569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 ...
	I0127 12:44:08.846104  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907: {Name:mkddcd8041839b23dcac607919086c1c2fffddd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.846289  569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907 ...
	I0127 12:44:08.846306  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907: {Name:mkf1a59adbd32ce8d4801c6bfca55113d1ba2215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.846416  569503 certs.go:381] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt
	I0127 12:44:08.846521  569503 certs.go:385] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907 -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key
	I0127 12:44:08.846603  569503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key
	I0127 12:44:08.846634  569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt with IP's: []
	I0127 12:44:08.938885  569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt ...
	I0127 12:44:08.938965  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt: {Name:mkb36f71941b55d205a664a8dfa613e34fda67b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.939161  569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key ...
	I0127 12:44:08.939208  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key: {Name:mk543fe641caf9d8b4f8f6176f603577f528c5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:08.939466  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem (1338 bytes)
	W0127 12:44:08.939535  569503 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307_empty.pem, impossibly tiny 0 bytes
	I0127 12:44:08.939552  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:44:08.939591  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:44:08.939633  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:44:08.939661  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem (1679 bytes)
	I0127 12:44:08.939715  569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem (1708 bytes)
	I0127 12:44:08.940591  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:44:08.966541  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:44:08.996736  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:44:09.023837  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:44:09.053893  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 12:44:09.096697  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:44:09.126435  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:44:09.155827  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:44:09.183391  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:44:09.208747  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem --> /usr/share/ca-certificates/311307.pem (1338 bytes)
	I0127 12:44:09.237220  569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /usr/share/ca-certificates/3113072.pem (1708 bytes)
	I0127 12:44:09.282530  569503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:44:09.302279  569503 ssh_runner.go:195] Run: openssl version
	I0127 12:44:09.308875  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3113072.pem && ln -fs /usr/share/ca-certificates/3113072.pem /etc/ssl/certs/3113072.pem"
	I0127 12:44:09.320299  569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3113072.pem
	I0127 12:44:09.324394  569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:14 /usr/share/ca-certificates/3113072.pem
	I0127 12:44:09.324447  569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3113072.pem
	I0127 12:44:09.333344  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3113072.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:44:09.345696  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:44:09.357051  569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:44:09.360940  569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:09 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:44:09.360995  569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:44:09.369576  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:44:09.380653  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/311307.pem && ln -fs /usr/share/ca-certificates/311307.pem /etc/ssl/certs/311307.pem"
	I0127 12:44:09.391295  569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/311307.pem
	I0127 12:44:09.395422  569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:14 /usr/share/ca-certificates/311307.pem
	I0127 12:44:09.395474  569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/311307.pem
	I0127 12:44:09.403659  569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/311307.pem /etc/ssl/certs/51391683.0"
	I0127 12:44:09.413600  569503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:44:09.417456  569503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:44:09.417511  569503 kubeadm.go:392] StartCluster: {Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:44:09.417648  569503 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:44:09.442963  569503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:44:09.456533  569503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:44:09.467137  569503 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 12:44:09.467205  569503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:44:09.478120  569503 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:44:09.478142  569503 kubeadm.go:157] found existing configuration files:
	
	I0127 12:44:09.478194  569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:44:09.488385  569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:44:09.488443  569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:44:09.498213  569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:44:09.508271  569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:44:09.508381  569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:44:09.518232  569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:44:09.528477  569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:44:09.528528  569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:44:09.537802  569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:44:09.546871  569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:44:09.546931  569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
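The four checks above implement one stale-config rule per file: keep the kubeconfig only if it already names the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. Condensed into a loop (file list and endpoint as in the log):

	ep='https://control-plane.minikube.internal:8443'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done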
	I0127 12:44:09.559335  569503 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 12:44:09.609157  569503 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:44:09.609250  569503 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:44:09.637990  569503 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 12:44:09.638093  569503 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1074-gcp
	I0127 12:44:09.638140  569503 kubeadm.go:310] OS: Linux
	I0127 12:44:09.638202  569503 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 12:44:09.638264  569503 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 12:44:09.638327  569503 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 12:44:09.638394  569503 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 12:44:09.638459  569503 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 12:44:09.638525  569503 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 12:44:09.638588  569503 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 12:44:09.638655  569503 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 12:44:09.638723  569503 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 12:44:09.716704  569503 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:44:09.716872  569503 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:44:09.717074  569503 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:44:09.728893  569503 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:44:09.734593  569503 out.go:235]   - Generating certificates and keys ...
	I0127 12:44:09.734734  569503 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:44:09.734820  569503 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:44:09.877397  569503 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:44:10.352339  569503 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:44:10.884423  569503 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:44:10.984394  569503 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:44:11.086069  569503 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:44:11.086353  569503 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost offline-docker-649313] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 12:44:11.311337  569503 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:44:11.311737  569503 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost offline-docker-649313] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 12:44:11.474151  569503 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:44:12.240459  569503 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:44:12.481763  569503 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:44:12.481876  569503 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:44:12.854762  569503 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:44:13.071301  569503 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:44:13.189328  569503 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:44:13.359691  569503 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:44:13.493031  569503 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:44:13.493902  569503 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:44:13.497545  569503 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:44:13.500055  569503 out.go:235]   - Booting up control plane ...
	I0127 12:44:13.500215  569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:44:13.500330  569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:44:13.501135  569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:44:13.516510  569503 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:44:13.523184  569503 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:44:13.523253  569503 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:44:13.635973  569503 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:44:13.636121  569503 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:44:14.637433  569503 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001504662s
	I0127 12:44:14.637569  569503 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:44:22.139494  569503 kubeadm.go:310] [api-check] The API server is healthy after 7.502026371s
	I0127 12:44:22.152244  569503 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:44:22.162955  569503 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:44:22.184311  569503 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:44:22.184653  569503 kubeadm.go:310] [mark-control-plane] Marking the node offline-docker-649313 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:44:22.193280  569503 kubeadm.go:310] [bootstrap-token] Using token: 2uq4yb.npayucp7r9tcyqdf
	I0127 12:44:22.194774  569503 out.go:235]   - Configuring RBAC rules ...
	I0127 12:44:22.194934  569503 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:44:22.199370  569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:44:22.206358  569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:44:22.208921  569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:44:22.211572  569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:44:22.214418  569503 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:44:22.546062  569503 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:44:22.992475  569503 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:44:23.545784  569503 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:44:23.546801  569503 kubeadm.go:310] 
	I0127 12:44:23.546876  569503 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:44:23.546885  569503 kubeadm.go:310] 
	I0127 12:44:23.546977  569503 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:44:23.546995  569503 kubeadm.go:310] 
	I0127 12:44:23.547019  569503 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:44:23.547094  569503 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:44:23.547168  569503 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:44:23.547180  569503 kubeadm.go:310] 
	I0127 12:44:23.547224  569503 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:44:23.547231  569503 kubeadm.go:310] 
	I0127 12:44:23.547301  569503 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:44:23.547313  569503 kubeadm.go:310] 
	I0127 12:44:23.547401  569503 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:44:23.547517  569503 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:44:23.547625  569503 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:44:23.547639  569503 kubeadm.go:310] 
	I0127 12:44:23.547781  569503 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:44:23.547907  569503 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:44:23.547918  569503 kubeadm.go:310] 
	I0127 12:44:23.547991  569503 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2uq4yb.npayucp7r9tcyqdf \
	I0127 12:44:23.548080  569503 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a \
	I0127 12:44:23.548117  569503 kubeadm.go:310] 	--control-plane 
	I0127 12:44:23.548129  569503 kubeadm.go:310] 
	I0127 12:44:23.548271  569503 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:44:23.548284  569503 kubeadm.go:310] 
	I0127 12:44:23.548385  569503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2uq4yb.npayucp7r9tcyqdf \
	I0127 12:44:23.548472  569503 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a 
	I0127 12:44:23.550434  569503 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 12:44:23.550629  569503 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1074-gcp\n", err: exit status 1
	I0127 12:44:23.550735  569503 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
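The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA to verify it out of band; the standard openssl pipeline, pointed at minikube's certs dir rather than the default /etc/kubernetes/pki:

	# SHA-256 over the DER-encoded public key of the cluster CA:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'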
	I0127 12:44:23.550758  569503 cni.go:84] Creating CNI manager for ""
	I0127 12:44:23.550776  569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 12:44:23.553335  569503 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:44:23.554564  569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:44:23.563295  569503 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
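The 496-byte conflist itself is not shown; for the bridge CNI selected above with pod CIDR 10.244.0.0/16, a config of roughly this shape is what lands in /etc/cni/net.d (field names and values here are an assumption, not the captured file):

	sudo cat /etc/cni/net.d/1-k8s.conflist
	# Assumed shape:
	# { "cniVersion": "1.0.0", "name": "bridge",
	#   "plugins": [
	#     { "type": "bridge", "isDefaultGateway": true,
	#       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	#     { "type": "portmap", "capabilities": { "portMappings": true } } ] }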
	I0127 12:44:23.579503  569503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:44:23.579647  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes offline-docker-649313 minikube.k8s.io/updated_at=2025_01_27T12_44_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=offline-docker-649313 minikube.k8s.io/primary=true
	I0127 12:44:23.579654  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:23.588351  569503 ops.go:34] apiserver oom_adj: -16
	I0127 12:44:23.678210  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:24.178524  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:24.678472  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:25.179124  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:25.679198  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:26.179246  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:26.679080  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:27.178423  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:27.679194  569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:44:27.774956  569503 kubeadm.go:1113] duration metric: took 4.195360661s to wait for elevateKubeSystemPrivileges
	I0127 12:44:27.774994  569503 kubeadm.go:394] duration metric: took 18.357488722s to StartCluster
	I0127 12:44:27.775018  569503 settings.go:142] acquiring lock: {Name:mk55dbc0704f2f9d31c80856a45552242884623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:27.775096  569503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:44:27.776545  569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/kubeconfig: {Name:mk59d9102d1fe380f0fe65cd8c2acffe42bba157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:44:27.776825  569503 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:44:27.777057  569503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:44:27.777155  569503 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:44:27.777241  569503 addons.go:69] Setting storage-provisioner=true in profile "offline-docker-649313"
	I0127 12:44:27.777268  569503 addons.go:238] Setting addon storage-provisioner=true in "offline-docker-649313"
	I0127 12:44:27.777303  569503 host.go:66] Checking if "offline-docker-649313" exists ...
	I0127 12:44:27.777317  569503 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:44:27.777379  569503 addons.go:69] Setting default-storageclass=true in profile "offline-docker-649313"
	I0127 12:44:27.777396  569503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-docker-649313"
	I0127 12:44:27.777723  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:27.777844  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:27.778947  569503 out.go:177] * Verifying Kubernetes components...
	I0127 12:44:27.780238  569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:44:27.811588  569503 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:44:27.813011  569503 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:44:27.813035  569503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:44:27.813115  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:27.819046  569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:44:27.819717  569503 addons.go:238] Setting addon default-storageclass=true in "offline-docker-649313"
	I0127 12:44:27.819754  569503 host.go:66] Checking if "offline-docker-649313" exists ...
	I0127 12:44:27.820089  569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
	I0127 12:44:27.836922  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:27.845939  569503 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:44:27.845966  569503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:44:27.846026  569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
	I0127 12:44:27.863948  569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
	I0127 12:44:27.919767  569503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
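The sed pipeline above rewrites the Corefile inside the coredns ConfigMap in-stream and replaces the object: the "i\" expressions insert a "log" directive before the "errors" line and a "hosts" block before the "forward" line. Reconstructed from the sed expressions (not dumped from the cluster), the edited Corefile reads roughly:

	#     log
	#     errors
	#     ...
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }
	#     forward . /etc/resolv.conf ...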
	I0127 12:44:27.985260  569503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:44:27.985600  569503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:44:27.993994  569503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:44:28.470697  569503 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0127 12:44:28.471817  569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0127 12:44:28.722116  569503 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "offline-docker-649313" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0127 12:44:28.722150  569503 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0127 12:44:28.911477  569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:44:28.911771  569503 node_ready.go:35] waiting up to 6m0s for node "offline-docker-649313" to be "Ready" ...
	I0127 12:44:28.912085  569503 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 12:44:28.912115  569503 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 12:44:28.915967  569503 node_ready.go:49] node "offline-docker-649313" has status "Ready":"True"
	I0127 12:44:28.915992  569503 node_ready.go:38] duration metric: took 4.186192ms for node "offline-docker-649313" to be "Ready" ...
	I0127 12:44:28.916006  569503 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:44:28.923003  569503 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:44:28.924514  569503 addons.go:514] duration metric: took 1.147359304s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 12:44:28.925005  569503 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace to be "Ready" ...
	I0127 12:44:30.931210  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:32.932004  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:35.431086  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:37.931014  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:40.432114  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:42.434906  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:44.931100  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:46.931843  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:49.431763  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:51.930608  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:53.930751  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:55.931544  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:44:58.432234  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:45:00.931364  569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
	I0127 12:45:02.431948  569503 pod_ready.go:93] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.431984  569503 pod_ready.go:82] duration metric: took 33.506958094s for pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.432003  569503 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.436910  569503 pod_ready.go:93] pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.436933  569503 pod_ready.go:82] duration metric: took 4.921663ms for pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.436944  569503 pod_ready.go:79] waiting up to 6m0s for pod "etcd-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.441105  569503 pod_ready.go:93] pod "etcd-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.441135  569503 pod_ready.go:82] duration metric: took 4.182745ms for pod "etcd-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.441148  569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.446059  569503 pod_ready.go:93] pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.446087  569503 pod_ready.go:82] duration metric: took 4.928806ms for pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.446101  569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.450024  569503 pod_ready.go:93] pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.450088  569503 pod_ready.go:82] duration metric: took 3.977861ms for pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.450114  569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nwtdt" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.829725  569503 pod_ready.go:93] pod "kube-proxy-nwtdt" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:02.829760  569503 pod_ready.go:82] duration metric: took 379.627424ms for pod "kube-proxy-nwtdt" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:02.829775  569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:03.228873  569503 pod_ready.go:93] pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
	I0127 12:45:03.228896  569503 pod_ready.go:82] duration metric: took 399.11306ms for pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
	I0127 12:45:03.228909  569503 pod_ready.go:39] duration metric: took 34.312891738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:45:03.228926  569503 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:45:03.228977  569503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:45:03.240680  569503 api_server.go:72] duration metric: took 35.46381462s to wait for apiserver process to appear ...
	I0127 12:45:03.240707  569503 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:45:03.240732  569503 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 12:45:03.244842  569503 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 12:45:03.245735  569503 api_server.go:141] control plane version: v1.32.1
	I0127 12:45:03.245759  569503 api_server.go:131] duration metric: took 5.045728ms to wait for apiserver health ...
	I0127 12:45:03.245768  569503 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:45:03.431863  569503 system_pods.go:59] 8 kube-system pods found
	I0127 12:45:03.431895  569503 system_pods.go:61] "coredns-668d6bf9bc-6nkx4" [44bc4f70-dd40-4791-864c-0458af6a5fe8] Running
	I0127 12:45:03.431900  569503 system_pods.go:61] "coredns-668d6bf9bc-7rv77" [d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6] Running
	I0127 12:45:03.431903  569503 system_pods.go:61] "etcd-offline-docker-649313" [2d38e1ea-f32a-48e3-a76a-3f528870d44f] Running
	I0127 12:45:03.431907  569503 system_pods.go:61] "kube-apiserver-offline-docker-649313" [bbc9118e-6f26-4183-9206-d53c19d12309] Running
	I0127 12:45:03.431911  569503 system_pods.go:61] "kube-controller-manager-offline-docker-649313" [58a91e3e-1ab9-4516-82f0-63c2d864c1ee] Running
	I0127 12:45:03.431913  569503 system_pods.go:61] "kube-proxy-nwtdt" [a2371845-b951-4e52-9c2a-01a394a9b403] Running
	I0127 12:45:03.431916  569503 system_pods.go:61] "kube-scheduler-offline-docker-649313" [5785bcf7-128b-48bd-aaf2-42bdb490bdb7] Running
	I0127 12:45:03.431919  569503 system_pods.go:61] "storage-provisioner" [56cf6fce-41be-4b78-9a32-86e8e902d97c] Running
	I0127 12:45:03.431925  569503 system_pods.go:74] duration metric: took 186.15091ms to wait for pod list to return data ...
	I0127 12:45:03.431933  569503 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:45:03.629572  569503 default_sa.go:45] found service account: "default"
	I0127 12:45:03.629608  569503 default_sa.go:55] duration metric: took 197.667178ms for default service account to be created ...
	I0127 12:45:03.629621  569503 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:45:03.832085  569503 system_pods.go:87] 8 kube-system pods found
** /stderr **
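For reference, the readiness sequence the stderr log walks through (node Ready, system pods Ready, apiserver process, then the healthz endpoint) can be reproduced by hand against a live profile. A minimal sketch, assuming the cluster is still running and kubectl points at the test kubeconfig shown above; the kubectl wait line is an equivalent of minikube's per-pod polling, not the harness's own code:

	# Same healthz probe the log issues at 12:45:03 (-k skips TLS verification)
	curl -k https://192.168.76.2:8443/healthz
	# Same apiserver process check the log runs over SSH
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Equivalent of the per-pod "Ready" polling, using kubectl's built-in wait
	kubectl --kubeconfig /home/jenkins/minikube-integration/20317-304536/kubeconfig \
	  -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m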
aab_offline_test.go:58: out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker failed: signal: killed
panic.go:629: *** TestOffline FAILED at 2025-01-27 12:58:50.239414494 +0000 UTC m=+3021.326846654
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-649313
I0127 12:58:50.249825  311307 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
helpers_test.go:235: (dbg) docker inspect offline-docker-649313:
-- stdout --
	[
	    {
	        "Id": "60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec",
	        "Created": "2025-01-27T12:44:00.495468028Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 570991,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T12:44:00.629049387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/hostname",
	        "HostsPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/hosts",
	        "LogPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec-json.log",
	        "Name": "/offline-docker-649313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "offline-docker-649313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "offline-docker-649313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271-init/diff:/var/lib/docker/overlay2/d46080dabfd09e849513ff8da7d233565f9a821ed6a2597f6c352e21817feda4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "offline-docker-649313",
	                "Source": "/var/lib/docker/volumes/offline-docker-649313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "offline-docker-649313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "offline-docker-649313",
	                "name.minikube.sigs.k8s.io": "offline-docker-649313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "258c70e2e1b1e9ff20bdf397aaebcc41e12c4bfa092616a709da0b58ba7e207e",
	            "SandboxKey": "/var/run/docker/netns/258c70e2e1b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "offline-docker-649313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "85fc0d82ab8d81d401616a015fd721eade8335a1d19dad0d50d1f59cc93fc120",
	                    "EndpointID": "7b3011dbfb414aafc45bde98ea2fe15d11a70c649bfce95abe0d4c36a18fc7dd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "offline-docker-649313",
	                        "60587125548a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
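The full docker inspect dump above can be reduced to the fields the post-mortem actually cares about using Go templates; a minimal sketch, assuming the container still exists at the time it is run:

	# Container state (the dump above shows "running")
	docker inspect -f '{{.State.Status}}' offline-docker-649313
	# Host port mapped to the apiserver's 8443/tcp (32988 in the dump above);
	# this is the same template pattern the harness uses for 22/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' offline-docker-649313
	# Static IP assigned on the profile network (192.168.76.2 in the dump above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' offline-docker-649313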
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p offline-docker-649313 -n offline-docker-649313
helpers_test.go:244: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestOffline]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p offline-docker-649313 logs -n 25
helpers_test.go:252: TestOffline logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args               |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-244099 pgrep  | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | -a kubelet                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | cat /etc/nsswitch.conf          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | cat /etc/hosts                  |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | cat /etc/resolv.conf            |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | crictl pods                     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | crictl ps --all                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | find /etc/cni -type f -exec sh  |                       |         |         |                     |                     |
	|         | -c 'echo {}; cat {}' \;         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | ip a s                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | ip r s                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | iptables-save                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | iptables -t nat -L -n -v        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | cat /run/flannel/subnet.env     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC |                     |
	|         | sudo cat                        |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | systemctl status kubelet --all  |                       |         |         |                     |                     |
	|         | --full --no-pager               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | sudo systemctl cat kubelet      |                       |         |         |                     |                     |
	|         | --no-pager                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | journalctl -xeu kubelet --all   |                       |         |         |                     |                     |
	|         | --full --no-pager               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | sudo cat                        |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | sudo cat                        |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | systemctl status docker --all   |                       |         |         |                     |                     |
	|         | --full --no-pager               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | sudo systemctl cat docker       |                       |         |         |                     |                     |
	|         | --no-pager                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | cat /etc/docker/daemon.json     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | docker system info              |                       |         |         |                     |                     |
	| ssh     | -p false-244099 pgrep -a        | false-244099          | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | kubelet                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099 sudo   | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | systemctl status cri-docker     |                       |         |         |                     |                     |
	|         | --all --full --no-pager         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-244099        | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
	|         | sudo systemctl cat cri-docker   |                       |         |         |                     |                     |
	|         | --no-pager                      |                       |         |         |                     |                     |
	|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:58:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:58:12.373600  750031 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:58:12.373934  750031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:58:12.373947  750031 out.go:358] Setting ErrFile to fd 2...
	I0127 12:58:12.373954  750031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:58:12.374171  750031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:58:12.375009  750031 out.go:352] Setting JSON to false
	I0127 12:58:12.376646  750031 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31239,"bootTime":1737951453,"procs":475,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:58:12.376755  750031 start.go:139] virtualization: kvm guest
	I0127 12:58:12.379012  750031 out.go:177] * [false-244099] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:58:12.380343  750031 notify.go:220] Checking for updates...
	I0127 12:58:12.380416  750031 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:58:12.381639  750031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:58:12.383050  750031 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:58:12.384409  750031 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:58:12.385817  750031 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:58:12.387085  750031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:58:12.388874  750031 config.go:182] Loaded profile config "custom-flannel-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:58:12.389083  750031 config.go:182] Loaded profile config "default-k8s-diff-port-359066": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:58:12.389286  750031 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:58:12.389402  750031 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:58:12.418652  750031 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:58:12.418743  750031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:58:12.479252  750031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-27 12:58:12.467704931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:58:12.479399  750031 docker.go:318] overlay module found
	I0127 12:58:12.482107  750031 out.go:177] * Using the docker driver based on user configuration
	I0127 12:58:12.483494  750031 start.go:297] selected driver: docker
	I0127 12:58:12.483513  750031 start.go:901] validating driver "docker" against <nil>
	I0127 12:58:12.483527  750031 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:58:12.484727  750031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:58:12.549249  750031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-27 12:58:12.540403252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:58:12.549550  750031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:58:12.549877  750031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:58:12.551866  750031 out.go:177] * Using Docker driver with root privileges
	I0127 12:58:12.553239  750031 cni.go:84] Creating CNI manager for "false"
	I0127 12:58:12.553328  750031 start.go:340] cluster config:
	{Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:58:12.554773  750031 out.go:177] * Starting "false-244099" primary control-plane node in "false-244099" cluster
	I0127 12:58:12.556066  750031 cache.go:121] Beginning downloading kic base image for docker with docker
	I0127 12:58:12.557346  750031 out.go:177] * Pulling base image v0.0.46 ...
	I0127 12:58:12.558687  750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:58:12.558741  750031 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 12:58:12.558755  750031 cache.go:56] Caching tarball of preloaded images
	I0127 12:58:12.558790  750031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:58:12.558865  750031 preload.go:172] Found /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:58:12.558883  750031 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:58:12.559002  750031 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json ...
	I0127 12:58:12.559026  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json: {Name:mkd8d862cb70d3b3e09f1f416894d1cde8bc47e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:12.586094  750031 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 12:58:12.586129  750031 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 12:58:12.586153  750031 cache.go:227] Successfully downloaded all kic artifacts
	I0127 12:58:12.586195  750031 start.go:360] acquireMachinesLock for false-244099: {Name:mkb9db4e1e07c88c0876893047ca693eae187ed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:58:12.586331  750031 start.go:364] duration metric: took 112.781µs to acquireMachinesLock for "false-244099"
	I0127 12:58:12.586365  750031 start.go:93] Provisioning new machine with config: &{Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:58:12.586474  750031 start.go:125] createHost starting for "" (driver="docker")
	I0127 12:58:10.496317  740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:12.993460  740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:12.186407  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:14.687871  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:12.589384  750031 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0127 12:58:12.589712  750031 start.go:159] libmachine.API.Create for "false-244099" (driver="docker")
	I0127 12:58:12.589759  750031 client.go:168] LocalClient.Create starting
	I0127 12:58:12.589849  750031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem
	I0127 12:58:12.589901  750031 main.go:141] libmachine: Decoding PEM data...
	I0127 12:58:12.589918  750031 main.go:141] libmachine: Parsing certificate...
	I0127 12:58:12.589985  750031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem
	I0127 12:58:12.590015  750031 main.go:141] libmachine: Decoding PEM data...
	I0127 12:58:12.590031  750031 main.go:141] libmachine: Parsing certificate...
	I0127 12:58:12.590499  750031 cli_runner.go:164] Run: docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 12:58:12.613828  750031 cli_runner.go:211] docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 12:58:12.613934  750031 network_create.go:284] running [docker network inspect false-244099] to gather additional debugging logs...
	I0127 12:58:12.613967  750031 cli_runner.go:164] Run: docker network inspect false-244099
	W0127 12:58:12.638796  750031 cli_runner.go:211] docker network inspect false-244099 returned with exit code 1
	I0127 12:58:12.638834  750031 network_create.go:287] error running [docker network inspect false-244099]: docker network inspect false-244099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network false-244099 not found
	I0127 12:58:12.638850  750031 network_create.go:289] output of [docker network inspect false-244099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network false-244099 not found
	
	** /stderr **
	I0127 12:58:12.638985  750031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:58:12.658007  750031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a67733940b1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:47:92:de:9e} reservation:<nil>}
	I0127 12:58:12.659136  750031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-526e8be49203 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:00:a4:5e:8f} reservation:<nil>}
	I0127 12:58:12.660548  750031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1505344accd1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ee:63:1d:4f} reservation:<nil>}
	I0127 12:58:12.661602  750031 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-85fc0d82ab8d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:d8:f9:c7:10} reservation:<nil>}
	I0127 12:58:12.662532  750031 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d72310}
	I0127 12:58:12.662561  750031 network_create.go:124] attempt to create docker network false-244099 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0127 12:58:12.662605  750031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-244099 false-244099
	I0127 12:58:12.749517  750031 network_create.go:108] docker network false-244099 192.168.85.0/24 created
	I0127 12:58:12.749558  750031 kic.go:121] calculated static IP "192.168.85.2" for the "false-244099" container
	I0127 12:58:12.749619  750031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 12:58:12.770355  750031 cli_runner.go:164] Run: docker volume create false-244099 --label name.minikube.sigs.k8s.io=false-244099 --label created_by.minikube.sigs.k8s.io=true
	I0127 12:58:12.794239  750031 oci.go:103] Successfully created a docker volume false-244099
	I0127 12:58:12.794348  750031 cli_runner.go:164] Run: docker run --rm --name false-244099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-244099 --entrypoint /usr/bin/test -v false-244099:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 12:58:13.416927  750031 oci.go:107] Successfully prepared a docker volume false-244099
	I0127 12:58:13.416983  750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:58:13.417016  750031 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 12:58:13.417119  750031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-244099:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 12:58:15.486625  740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:17.986271  740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:17.187156  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:19.687250  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:19.211341  750031 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-244099:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.794179028s)
	I0127 12:58:19.211387  750031 kic.go:203] duration metric: took 5.794367665s to extract preloaded images to volume ...
	W0127 12:58:19.211522  750031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 12:58:19.211657  750031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 12:58:19.276256  750031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-244099 --name false-244099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-244099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-244099 --network false-244099 --ip 192.168.85.2 --volume false-244099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 12:58:19.718438  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Running}}
	I0127 12:58:19.735808  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:19.757072  750031 cli_runner.go:164] Run: docker exec false-244099 stat /var/lib/dpkg/alternatives/iptables
	I0127 12:58:19.801509  750031 oci.go:144] the created container "false-244099" has a running status.
	I0127 12:58:19.801555  750031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa...
	I0127 12:58:20.478786  750031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 12:58:20.507101  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:20.526419  750031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 12:58:20.526447  750031 kic_runner.go:114] Args: [docker exec --privileged false-244099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 12:58:20.567246  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:20.585964  750031 machine.go:93] provisionDockerMachine start ...
	I0127 12:58:20.586077  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:20.604009  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:20.604288  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:20.604306  750031 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:58:20.731680  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: false-244099
	
	I0127 12:58:20.731710  750031 ubuntu.go:169] provisioning hostname "false-244099"
	I0127 12:58:20.731772  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:20.751635  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:20.751836  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:20.751851  750031 main.go:141] libmachine: About to run SSH command:
	sudo hostname false-244099 && echo "false-244099" | sudo tee /etc/hostname
	I0127 12:58:20.897005  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: false-244099
	
	I0127 12:58:20.897088  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:20.914699  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:20.914918  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:20.914944  750031 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-244099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-244099/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-244099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:58:21.048525  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:58:21.048562  750031 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-304536/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-304536/.minikube}
	I0127 12:58:21.048597  750031 ubuntu.go:177] setting up certificates
	I0127 12:58:21.048609  750031 provision.go:84] configureAuth start
	I0127 12:58:21.048679  750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
	I0127 12:58:21.066399  750031 provision.go:143] copyHostCerts
	I0127 12:58:21.066460  750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem, removing ...
	I0127 12:58:21.066469  750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem
	I0127 12:58:21.066535  750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem (1082 bytes)
	I0127 12:58:21.066622  750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem, removing ...
	I0127 12:58:21.066630  750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem
	I0127 12:58:21.066653  750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem (1123 bytes)
	I0127 12:58:21.066712  750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem, removing ...
	I0127 12:58:21.066719  750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem
	I0127 12:58:21.066739  750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem (1679 bytes)
	I0127 12:58:21.066795  750031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem org=jenkins.false-244099 san=[127.0.0.1 192.168.85.2 false-244099 localhost minikube]
	I0127 12:58:21.274244  750031 provision.go:177] copyRemoteCerts
	I0127 12:58:21.274314  750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:58:21.274352  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:21.292887  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:21.385450  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:58:21.409082  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:58:21.432720  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:58:21.454804  750031 provision.go:87] duration metric: took 406.174669ms to configureAuth
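configureAuth above refreshes the host-side CA and client material under .minikube, issues a server certificate whose SANs cover every name the daemon will be reached by (127.0.0.1, 192.168.85.2, false-244099, localhost, minikube), and copies ca.pem/server.pem/server-key.pem into /etc/docker for the dockerd --tlsverify flags visible in the unit file below. minikube generates these certs in Go; a roughly equivalent openssl sketch, assuming ca.pem and ca-key.pem already exist in the working directory:

	# Sketch only: issue a server cert with the SANs listed in the log above.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.false-244099" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=DNS:false-244099,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.85.2")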
	I0127 12:58:21.454835  750031 ubuntu.go:193] setting minikube options for container-runtime
	I0127 12:58:21.455033  750031 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:58:21.455095  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:21.473360  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:21.473634  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:21.473653  750031 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:58:21.604777  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 12:58:21.604813  750031 ubuntu.go:71] root file system type: overlay
	I0127 12:58:21.604956  750031 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:58:21.605028  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:21.623177  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:21.623384  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:21.623468  750031 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:58:21.763840  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:58:21.763920  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:21.782240  750031 main.go:141] libmachine: Using SSH client type: native
	I0127 12:58:21.782464  750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I0127 12:58:21.782494  750031 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
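This command makes the unit update idempotent: the candidate file is written to docker.service.new, diff -u exits non-zero exactly when it differs from the live unit, so the || block (move into place, daemon-reload, enable, restart) runs only when something actually changed. The same pattern, extracted as a sketch for an arbitrary unit:

	# Replace a systemd unit only if its content changed (pattern from the log).
	UNIT=/lib/systemd/system/docker.service   # placeholder path
	sudo diff -u "$UNIT" "$UNIT.new" || {
	    sudo mv "$UNIT.new" "$UNIT"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	}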
	I0127 12:58:20.487145  740231 pod_ready.go:93] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.487168  740231 pod_ready.go:82] duration metric: took 16.0070903s for pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.487178  740231 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.492331  740231 pod_ready.go:93] pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.492354  740231 pod_ready.go:82] duration metric: took 5.169004ms for pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.492365  740231 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.497019  740231 pod_ready.go:93] pod "etcd-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.497051  740231 pod_ready.go:82] duration metric: took 4.679605ms for pod "etcd-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.497062  740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.504482  740231 pod_ready.go:93] pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.504507  740231 pod_ready.go:82] duration metric: took 7.43749ms for pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.504517  740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.509056  740231 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.509079  740231 pod_ready.go:82] duration metric: took 4.554501ms for pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.509092  740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-h9g74" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.884757  740231 pod_ready.go:93] pod "kube-proxy-h9g74" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:20.884783  740231 pod_ready.go:82] duration metric: took 375.682953ms for pod "kube-proxy-h9g74" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:20.884793  740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:21.284670  740231 pod_ready.go:93] pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:21.284702  740231 pod_ready.go:82] duration metric: took 399.899646ms for pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:21.284719  740231 pod_ready.go:39] duration metric: took 16.816187396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:58:21.284745  740231 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:58:21.284800  740231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:58:21.297756  740231 api_server.go:72] duration metric: took 18.507036106s to wait for apiserver process to appear ...
	I0127 12:58:21.297782  740231 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:58:21.297803  740231 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0127 12:58:21.302222  740231 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0127 12:58:21.303269  740231 api_server.go:141] control plane version: v1.32.1
	I0127 12:58:21.303298  740231 api_server.go:131] duration metric: took 5.507067ms to wait for apiserver health ...
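The healthz gate is a plain HTTPS GET against the apiserver; the 200 with body "ok" above is all it checks for before moving on. By hand the same probe would look like this (sketch; -k skips CA verification, which minikube's own client avoids by trusting its CA):

	curl -ks https://192.168.103.2:8443/healthz
	# expected output: ok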
	I0127 12:58:21.303309  740231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:58:21.488053  740231 system_pods.go:59] 8 kube-system pods found
	I0127 12:58:21.488089  740231 system_pods.go:61] "coredns-668d6bf9bc-dcvd8" [0ecb09bd-1300-40f5-a0f2-fc8ce9b0d72d] Running
	I0127 12:58:21.488097  740231 system_pods.go:61] "coredns-668d6bf9bc-zwmtv" [debe5abf-11de-47ae-b7c8-4ef1e4c466c8] Running
	I0127 12:58:21.488102  740231 system_pods.go:61] "etcd-custom-flannel-244099" [96e5ba57-556b-487d-9556-7f3bcf498077] Running
	I0127 12:58:21.488108  740231 system_pods.go:61] "kube-apiserver-custom-flannel-244099" [1d76f029-f119-4d24-8bcf-289895a4190f] Running
	I0127 12:58:21.488113  740231 system_pods.go:61] "kube-controller-manager-custom-flannel-244099" [dad5aee1-228e-413b-bddb-8c23faaa5b93] Running
	I0127 12:58:21.488118  740231 system_pods.go:61] "kube-proxy-h9g74" [a304f669-d44f-4951-9683-841515701254] Running
	I0127 12:58:21.488123  740231 system_pods.go:61] "kube-scheduler-custom-flannel-244099" [ab6ccb2f-51bd-4e26-9797-05fc429b8cfb] Running
	I0127 12:58:21.488132  740231 system_pods.go:61] "storage-provisioner" [8a9c05d3-3553-4b7d-99cd-d2b26c5f479b] Running
	I0127 12:58:21.488139  740231 system_pods.go:74] duration metric: took 184.823334ms to wait for pod list to return data ...
	I0127 12:58:21.488151  740231 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:58:21.684822  740231 default_sa.go:45] found service account: "default"
	I0127 12:58:21.684859  740231 default_sa.go:55] duration metric: took 196.697507ms for default service account to be created ...
	I0127 12:58:21.684870  740231 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:58:21.886093  740231 system_pods.go:87] 8 kube-system pods found
	I0127 12:58:22.498456  750031 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-12-17 15:44:19.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-01-27 12:58:21.759601475 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0127 12:58:22.498494  750031 machine.go:96] duration metric: took 1.912508753s to provisionDockerMachine
	I0127 12:58:22.498510  750031 client.go:171] duration metric: took 9.908736816s to LocalClient.Create
	I0127 12:58:22.498533  750031 start.go:167] duration metric: took 9.908823652s to libmachine.API.Create "false-244099"
	I0127 12:58:22.498556  750031 start.go:293] postStartSetup for "false-244099" (driver="docker")
	I0127 12:58:22.498572  750031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:58:22.498638  750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:58:22.498681  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:22.515870  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:22.609480  750031 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:58:22.612645  750031 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 12:58:22.612679  750031 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 12:58:22.612690  750031 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 12:58:22.612697  750031 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 12:58:22.612707  750031 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/addons for local assets ...
	I0127 12:58:22.612753  750031 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/files for local assets ...
	I0127 12:58:22.612841  750031 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem -> 3113072.pem in /etc/ssl/certs
	I0127 12:58:22.612942  750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:58:22.621039  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /etc/ssl/certs/3113072.pem (1708 bytes)
	I0127 12:58:22.643937  750031 start.go:296] duration metric: took 145.364237ms for postStartSetup
	I0127 12:58:22.644297  750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
	I0127 12:58:22.662219  750031 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json ...
	I0127 12:58:22.662583  750031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:58:22.662644  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:22.680108  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:22.773413  750031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 12:58:22.778139  750031 start.go:128] duration metric: took 10.191645506s to createHost
	I0127 12:58:22.778170  750031 start.go:83] releasing machines lock for "false-244099", held for 10.191826053s
	I0127 12:58:22.778240  750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
	I0127 12:58:22.795411  750031 ssh_runner.go:195] Run: cat /version.json
	I0127 12:58:22.795481  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:22.795496  750031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:58:22.795576  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:22.816131  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:22.816965  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:22.982023  750031 ssh_runner.go:195] Run: systemctl --version
	I0127 12:58:22.987029  750031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:58:22.991988  750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 12:58:23.015716  750031 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 12:58:23.015787  750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 12:58:23.032993  750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 12:58:23.048931  750031 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
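Before kubeadm runs, every CNI config found on the node is normalized: the loopback conf gains a name and cniVersion 1.0.0, and bridge/podman configs are rewritten to the cluster pod CIDR 10.244.0.0/16 so stale subnets cannot collide with it. A quick check that the rewrite landed, as a sketch:

	# Confirm patched CNI configs carry the cluster pod CIDR.
	grep -h '"subnet"' /etc/cni/net.d/*.conf /etc/cni/net.d/*.conflist 2>/dev/null
	# expected: "subnet": "10.244.0.0/16"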
	I0127 12:58:23.048965  750031 start.go:495] detecting cgroup driver to use...
	I0127 12:58:23.049003  750031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:58:23.049132  750031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:58:23.064383  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:58:23.073883  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:58:23.083270  750031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:58:23.083332  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:58:23.092463  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:58:23.101771  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:58:23.110798  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:58:23.119422  750031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:58:23.128006  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:58:23.136736  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:58:23.145774  750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:58:23.155256  750031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:58:23.164790  750031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:58:23.174073  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:23.261825  750031 ssh_runner.go:195] Run: sudo systemctl restart containerd
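The sed batch above pins containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, pause:3.10 as the sandbox image, and /etc/cni/net.d as its CNI directory, then restarts it. The kubelet config generated later (cgroupDriver: cgroupfs) has to agree; a sketch of the cross-check, reusing the same docker query the log itself runs at 12:58:24:

	# Verify the runtimes agree on the cgroup driver.
	docker info --format '{{.CgroupDriver}}'          # expect: cgroupfs
	grep 'SystemdCgroup' /etc/containerd/config.toml  # expect: SystemdCgroup = false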
	I0127 12:58:23.355966  750031 start.go:495] detecting cgroup driver to use...
	I0127 12:58:23.356032  750031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:58:23.356086  750031 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:58:23.367622  750031 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0127 12:58:23.367696  750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:58:23.380037  750031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:58:23.397443  750031 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:58:23.401445  750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:58:23.410898  750031 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:58:23.429094  750031 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:58:23.516773  750031 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:58:23.613156  750031 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:58:23.613307  750031 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:58:23.632098  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:23.732361  750031 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:58:24.011398  750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:58:24.022879  750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:58:24.034582  750031 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:58:24.117884  750031 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:58:24.204322  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:24.279716  750031 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:58:24.293486  750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:58:24.303933  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:24.381228  750031 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:58:24.441910  750031 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:58:24.441985  750031 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:58:24.445897  750031 start.go:563] Will wait 60s for crictl version
	I0127 12:58:24.445955  750031 ssh_runner.go:195] Run: which crictl
	I0127 12:58:24.449250  750031 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:58:24.485587  750031 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.1
	RuntimeApiVersion:  v1
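crictl version is the real readiness gate for cri-dockerd: it proves the CRI endpoint answers over the socket, not merely that the systemd units are active. Run manually it takes the endpoint from the /etc/crictl.yaml written above, or an explicit flag (sketch):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version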
	I0127 12:58:24.485659  750031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:58:24.510886  750031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:58:22.186285  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:24.685208  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:26.084734  740231 system_pods.go:105] "coredns-668d6bf9bc-dcvd8" [0ecb09bd-1300-40f5-a0f2-fc8ce9b0d72d] Running
	I0127 12:58:26.084758  740231 system_pods.go:105] "coredns-668d6bf9bc-zwmtv" [debe5abf-11de-47ae-b7c8-4ef1e4c466c8] Running
	I0127 12:58:26.084764  740231 system_pods.go:105] "etcd-custom-flannel-244099" [96e5ba57-556b-487d-9556-7f3bcf498077] Running
	I0127 12:58:26.084772  740231 system_pods.go:105] "kube-apiserver-custom-flannel-244099" [1d76f029-f119-4d24-8bcf-289895a4190f] Running
	I0127 12:58:26.084777  740231 system_pods.go:105] "kube-controller-manager-custom-flannel-244099" [dad5aee1-228e-413b-bddb-8c23faaa5b93] Running
	I0127 12:58:26.084782  740231 system_pods.go:105] "kube-proxy-h9g74" [a304f669-d44f-4951-9683-841515701254] Running
	I0127 12:58:26.084786  740231 system_pods.go:105] "kube-scheduler-custom-flannel-244099" [ab6ccb2f-51bd-4e26-9797-05fc429b8cfb] Running
	I0127 12:58:26.084795  740231 system_pods.go:105] "storage-provisioner" [8a9c05d3-3553-4b7d-99cd-d2b26c5f479b] Running
	I0127 12:58:26.084804  740231 system_pods.go:147] duration metric: took 4.399927637s to wait for k8s-apps to be running ...
	I0127 12:58:26.084814  740231 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:58:26.084869  740231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:58:26.096369  740231 system_svc.go:56] duration metric: took 11.541041ms WaitForService to wait for kubelet
	I0127 12:58:26.096403  740231 kubeadm.go:582] duration metric: took 23.305687653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:58:26.096427  740231 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:58:26.285200  740231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0127 12:58:26.285229  740231 node_conditions.go:123] node cpu capacity is 8
	I0127 12:58:26.285242  740231 node_conditions.go:105] duration metric: took 188.809537ms to run NodePressure ...
	I0127 12:58:26.285256  740231 start.go:241] waiting for startup goroutines ...
	I0127 12:58:26.285262  740231 start.go:246] waiting for cluster config update ...
	I0127 12:58:26.285273  740231 start.go:255] writing updated cluster config ...
	I0127 12:58:26.285524  740231 ssh_runner.go:195] Run: rm -f paused
	I0127 12:58:26.351226  740231 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:58:26.353171  740231 out.go:177] * Done! kubectl is now configured to use "custom-flannel-244099" cluster and "default" namespace by default
	I0127 12:58:24.536949  750031 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
	I0127 12:58:24.537056  750031 cli_runner.go:164] Run: docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:58:24.554832  750031 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0127 12:58:24.558856  750031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:58:24.570461  750031 kubeadm.go:883] updating cluster {Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:58:24.570644  750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:58:24.570720  750031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:58:24.591944  750031 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 12:58:24.591970  750031 docker.go:619] Images already preloaded, skipping extraction
	I0127 12:58:24.592040  750031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:58:24.611987  750031 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 12:58:24.612013  750031 cache_images.go:84] Images are preloaded, skipping loading
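The preload decision is a simple inventory: the docker images listing is compared against the image set expected for v1.32.1, and because all eight are already present the preload tarball is never extracted. The same membership test as a sketch:

	# Is a required image already present in the runtime?
	if docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx 'registry.k8s.io/kube-apiserver:v1.32.1'; then
	    echo "preloaded, skipping extraction"
	fi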
	I0127 12:58:24.612025  750031 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 docker true true} ...
	I0127 12:58:24.612135  750031 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=false-244099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false}
	I0127 12:58:24.612241  750031 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 12:58:24.659255  750031 cni.go:84] Creating CNI manager for "false"
	I0127 12:58:24.659282  750031 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:58:24.659309  750031 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-244099 NodeName:false-244099 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:58:24.659473  750031 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "false-244099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
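The three-document manifest above (kubeadm.k8s.io/v1beta4 InitConfiguration and ClusterConfiguration, plus KubeletConfiguration and KubeProxyConfiguration) is staged as kubeadm.yaml.new below and copied into place before init. Recent kubeadm releases can sanity-check such a file up front; a sketch, assuming the v1.32 binary path used in this run:

	# Sketch: validate the generated config before kubeadm init runs.
	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml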
	I0127 12:58:24.659549  750031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:58:24.668405  750031 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:58:24.668482  750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:58:24.676948  750031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 12:58:24.695057  750031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:58:24.712046  750031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 12:58:24.728932  750031 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0127 12:58:24.732265  750031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:58:24.742765  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:24.819875  750031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:58:24.834922  750031 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099 for IP: 192.168.85.2
	I0127 12:58:24.834955  750031 certs.go:194] generating shared ca certs ...
	I0127 12:58:24.834976  750031 certs.go:226] acquiring lock for ca certs: {Name:mk1b16f74c226e2be2c446b7baf1d60d1399508e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:24.835154  750031 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key
	I0127 12:58:24.835208  750031 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key
	I0127 12:58:24.835221  750031 certs.go:256] generating profile certs ...
	I0127 12:58:24.835295  750031 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key
	I0127 12:58:24.835309  750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt with IP's: []
	I0127 12:58:25.013234  750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt ...
	I0127 12:58:25.013266  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt: {Name:mk544c6a47de60ea9e6a96fd2e1af83ec1cc26a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.013421  750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key ...
	I0127 12:58:25.013433  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key: {Name:mkc00ce2454a82942bdf7bf29fc5994084688abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.013514  750031 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec
	I0127 12:58:25.013530  750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0127 12:58:25.162552  750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec ...
	I0127 12:58:25.162586  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec: {Name:mkf9f58cd13379161838b1820651898ad35d112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.162732  750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec ...
	I0127 12:58:25.162745  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec: {Name:mkf35a641c1ea4b2cc8d3b70daf636c12a652f0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.162819  750031 certs.go:381] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt
	I0127 12:58:25.162890  750031 certs.go:385] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key
	I0127 12:58:25.162942  750031 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key
	I0127 12:58:25.162958  750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt with IP's: []
	I0127 12:58:25.422535  750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt ...
	I0127 12:58:25.422566  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt: {Name:mk526bb37f61cb3704a3adee539c0168555157d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.422764  750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key ...
	I0127 12:58:25.422779  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key: {Name:mke9aa09b12da50c2769bf84c9672eed2459f066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:25.423003  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem (1338 bytes)
	W0127 12:58:25.423050  750031 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307_empty.pem, impossibly tiny 0 bytes
	I0127 12:58:25.423064  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:58:25.423096  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:58:25.423129  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:58:25.423163  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem (1679 bytes)
	I0127 12:58:25.423216  750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem (1708 bytes)
	I0127 12:58:25.423868  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:58:25.448061  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:58:25.471765  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:58:25.494534  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:58:25.516538  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:58:25.537994  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:58:25.560635  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:58:25.583314  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:58:25.605393  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:58:25.627808  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem --> /usr/share/ca-certificates/311307.pem (1338 bytes)
	I0127 12:58:25.650341  750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /usr/share/ca-certificates/3113072.pem (1708 bytes)
	I0127 12:58:25.673253  750031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:58:25.691006  750031 ssh_runner.go:195] Run: openssl version
	I0127 12:58:25.696267  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/311307.pem && ln -fs /usr/share/ca-certificates/311307.pem /etc/ssl/certs/311307.pem"
	I0127 12:58:25.705145  750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/311307.pem
	I0127 12:58:25.708538  750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:14 /usr/share/ca-certificates/311307.pem
	I0127 12:58:25.708590  750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/311307.pem
	I0127 12:58:25.714871  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/311307.pem /etc/ssl/certs/51391683.0"
	I0127 12:58:25.723498  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3113072.pem && ln -fs /usr/share/ca-certificates/3113072.pem /etc/ssl/certs/3113072.pem"
	I0127 12:58:25.732043  750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3113072.pem
	I0127 12:58:25.735439  750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:14 /usr/share/ca-certificates/3113072.pem
	I0127 12:58:25.735491  750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3113072.pem
	I0127 12:58:25.742393  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3113072.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:58:25.751687  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:58:25.761320  750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:58:25.764709  750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:09 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:58:25.764768  750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:58:25.771323  750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
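The openssl x509 -hash / ln -fs pairs implement OpenSSL's c_rehash convention: each trusted cert must be reachable in /etc/ssl/certs through a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here), which is how TLS clients locate a CA by hash. Done by hand (sketch):

	# Create the c_rehash-style link for one CA, as minikube does above.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"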
	I0127 12:58:25.780490  750031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:58:25.783587  750031 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:58:25.783645  750031 kubeadm.go:392] StartCluster: {Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:58:25.783754  750031 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:58:25.802365  750031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:58:25.810991  750031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:58:25.819331  750031 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 12:58:25.819397  750031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:58:25.827909  750031 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:58:25.827932  750031 kubeadm.go:157] found existing configuration files:
	
	I0127 12:58:25.827986  750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:58:25.836213  750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:58:25.836278  750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:58:25.844083  750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:58:25.852072  750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:58:25.852138  750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:58:25.859840  750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:58:25.867738  750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:58:25.867808  750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:58:25.877362  750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:58:25.885877  750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:58:25.885947  750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
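
The four grep-then-rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Condensed into one loop (same commands as the log runs):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
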
	I0127 12:58:25.893835  750031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 12:58:25.954066  750031 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 12:58:25.954338  750031 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1074-gcp\n", err: exit status 1
	I0127 12:58:26.011081  750031 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:58:26.686195  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:28.686293  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:31.185152  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:33.686081  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:35.591620  750031 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:58:35.591675  750031 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:58:35.591751  750031 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 12:58:35.591799  750031 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1074-gcp
	I0127 12:58:35.591834  750031 kubeadm.go:310] OS: Linux
	I0127 12:58:35.591875  750031 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 12:58:35.591967  750031 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 12:58:35.592061  750031 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 12:58:35.592147  750031 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 12:58:35.592261  750031 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 12:58:35.592345  750031 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 12:58:35.592417  750031 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 12:58:35.592482  750031 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 12:58:35.592549  750031 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 12:58:35.592644  750031 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:58:35.592810  750031 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:58:35.592964  750031 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:58:35.593040  750031 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:58:35.594677  750031 out.go:235]   - Generating certificates and keys ...
	I0127 12:58:35.594758  750031 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:58:35.594820  750031 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:58:35.594904  750031 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:58:35.594968  750031 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:58:35.595026  750031 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:58:35.595077  750031 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:58:35.595123  750031 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:58:35.595218  750031 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [false-244099 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0127 12:58:35.595281  750031 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:58:35.595393  750031 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [false-244099 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0127 12:58:35.595453  750031 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:58:35.595508  750031 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:58:35.595546  750031 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:58:35.595599  750031 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:58:35.595657  750031 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:58:35.595730  750031 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:58:35.595800  750031 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:58:35.595854  750031 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:58:35.595903  750031 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:58:35.595976  750031 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:58:35.596056  750031 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:58:35.597357  750031 out.go:235]   - Booting up control plane ...
	I0127 12:58:35.597454  750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:58:35.597524  750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:58:35.597586  750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:58:35.597726  750031 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:58:35.597835  750031 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:58:35.597881  750031 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:58:35.597991  750031 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:58:35.598092  750031 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:58:35.598145  750031 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001855835s
	I0127 12:58:35.598215  750031 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:58:35.598266  750031 kubeadm.go:310] [api-check] The API server is healthy after 4.502145282s
	I0127 12:58:35.598362  750031 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:58:35.598465  750031 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:58:35.598514  750031 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:58:35.598694  750031 kubeadm.go:310] [mark-control-plane] Marking the node false-244099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:58:35.598743  750031 kubeadm.go:310] [bootstrap-token] Using token: 01hxvb.iq5wg8lj60p8tw9k
	I0127 12:58:35.600030  750031 out.go:235]   - Configuring RBAC rules ...
	I0127 12:58:35.600144  750031 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:58:35.600273  750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:58:35.600440  750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:58:35.600635  750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:58:35.600736  750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:58:35.600815  750031 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:58:35.600933  750031 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:58:35.601000  750031 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:58:35.601061  750031 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:58:35.601070  750031 kubeadm.go:310] 
	I0127 12:58:35.601148  750031 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:58:35.601162  750031 kubeadm.go:310] 
	I0127 12:58:35.601281  750031 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:58:35.601291  750031 kubeadm.go:310] 
	I0127 12:58:35.601328  750031 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:58:35.601411  750031 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:58:35.601473  750031 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:58:35.601483  750031 kubeadm.go:310] 
	I0127 12:58:35.601565  750031 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:58:35.601578  750031 kubeadm.go:310] 
	I0127 12:58:35.601649  750031 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:58:35.601659  750031 kubeadm.go:310] 
	I0127 12:58:35.601740  750031 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:58:35.601867  750031 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:58:35.601949  750031 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:58:35.601957  750031 kubeadm.go:310] 
	I0127 12:58:35.602029  750031 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:58:35.602111  750031 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:58:35.602120  750031 kubeadm.go:310] 
	I0127 12:58:35.602195  750031 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 01hxvb.iq5wg8lj60p8tw9k \
	I0127 12:58:35.602287  750031 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a \
	I0127 12:58:35.602311  750031 kubeadm.go:310] 	--control-plane 
	I0127 12:58:35.602318  750031 kubeadm.go:310] 
	I0127 12:58:35.602400  750031 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:58:35.602410  750031 kubeadm.go:310] 
	I0127 12:58:35.602492  750031 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 01hxvb.iq5wg8lj60p8tw9k \
	I0127 12:58:35.602635  750031 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a 
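
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe, substituting the certificateDir logged earlier (/var/lib/minikube/certs) for the kubeadm default /etc/kubernetes/pki; this assumes an RSA CA key, which is kubeadm's default:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
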
	I0127 12:58:35.602650  750031 cni.go:84] Creating CNI manager for "false"
	I0127 12:58:35.602690  750031 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:58:35.602733  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:35.602808  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes false-244099 minikube.k8s.io/updated_at=2025_01_27T12_58_35_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=false-244099 minikube.k8s.io/primary=true
	I0127 12:58:35.695011  750031 ops.go:34] apiserver oom_adj: -16
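
An oom_adj of -16 means the apiserver process is strongly deprioritized as a kernel OOM-killer target; the check is the same one the log just ran:

	cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 on this node
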
	I0127 12:58:35.695137  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:36.196124  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:36.695547  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:37.196313  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:37.696289  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:38.195609  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:38.695313  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:39.195362  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:39.696012  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:40.195224  750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:58:40.279233  750031 kubeadm.go:1113] duration metric: took 4.676541035s to wait for elevateKubeSystemPrivileges
	I0127 12:58:40.279276  750031 kubeadm.go:394] duration metric: took 14.495635113s to StartCluster
	I0127 12:58:40.279301  750031 settings.go:142] acquiring lock: {Name:mk55dbc0704f2f9d31c80856a45552242884623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:40.279373  750031 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:58:40.280840  750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/kubeconfig: {Name:mk59d9102d1fe380f0fe65cd8c2acffe42bba157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:58:40.281070  750031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:58:40.281074  750031 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:58:40.281151  750031 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:58:40.281253  750031 addons.go:69] Setting storage-provisioner=true in profile "false-244099"
	I0127 12:58:40.281276  750031 addons.go:238] Setting addon storage-provisioner=true in "false-244099"
	I0127 12:58:40.281298  750031 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:58:40.281315  750031 host.go:66] Checking if "false-244099" exists ...
	I0127 12:58:40.281362  750031 addons.go:69] Setting default-storageclass=true in profile "false-244099"
	I0127 12:58:40.281377  750031 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-244099"
	I0127 12:58:40.281705  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:40.281900  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:40.283565  750031 out.go:177] * Verifying Kubernetes components...
	I0127 12:58:40.284837  750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:58:40.309240  750031 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:58:36.184125  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:38.185048  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:40.185246  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:40.309747  750031 addons.go:238] Setting addon default-storageclass=true in "false-244099"
	I0127 12:58:40.309794  750031 host.go:66] Checking if "false-244099" exists ...
	I0127 12:58:40.310308  750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
	I0127 12:58:40.310704  750031 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:58:40.310723  750031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:58:40.310763  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:40.332342  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:40.336244  750031 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:58:40.336270  750031 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:58:40.336339  750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
	I0127 12:58:40.355344  750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
	I0127 12:58:40.493563  750031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:58:40.583563  750031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:58:40.595140  750031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:58:40.689195  750031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:58:41.264907  750031 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
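
The sed pipeline at 12:58:40.493 rewrites the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the gateway (192.168.85.1) ahead of the forward directive. One way to confirm the injected record afterwards, assuming the kubeconfig written above:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'
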
	I0127 12:58:41.266822  750031 node_ready.go:35] waiting up to 15m0s for node "false-244099" to be "Ready" ...
	I0127 12:58:41.281697  750031 node_ready.go:49] node "false-244099" has status "Ready":"True"
	I0127 12:58:41.281795  750031 node_ready.go:38] duration metric: took 14.938882ms for node "false-244099" to be "Ready" ...
	I0127 12:58:41.282209  750031 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:58:41.294528  750031 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:41.769586  750031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174404511s)
	I0127 12:58:41.769687  750031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.080453167s)
	I0127 12:58:41.771514  750031 kapi.go:214] "coredns" deployment in "kube-system" namespace and "false-244099" context rescaled to 1 replicas
	I0127 12:58:41.781145  750031 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:58:41.782295  750031 addons.go:514] duration metric: took 1.501154845s for enable addons: enabled=[storage-provisioner default-storageclass]
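
Both addons are installed by scp-ing manifests into /etc/kubernetes/addons and applying them with the in-VM kubectl; the user-facing equivalent on this profile would be something like:

	minikube -p false-244099 addons enable storage-provisioner
	minikube -p false-244099 addons list
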
	I0127 12:58:42.185658  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:44.685940  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:42.801265  750031 pod_ready.go:93] pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:42.801300  750031 pod_ready.go:82] duration metric: took 1.506684938s for pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:42.801322  750031 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:44.307614  750031 pod_ready.go:93] pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:44.307641  750031 pod_ready.go:82] duration metric: took 1.506312016s for pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:44.307664  750031 pod_ready.go:79] waiting up to 15m0s for pod "etcd-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:44.311740  750031 pod_ready.go:93] pod "etcd-false-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:44.311760  750031 pod_ready.go:82] duration metric: took 4.087309ms for pod "etcd-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:44.311768  750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:46.318625  750031 pod_ready.go:93] pod "kube-apiserver-false-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:46.318651  750031 pod_ready.go:82] duration metric: took 2.006874406s for pod "kube-apiserver-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:46.318662  750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:48.325340  750031 pod_ready.go:103] pod "kube-controller-manager-false-244099" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:49.323762  750031 pod_ready.go:93] pod "kube-controller-manager-false-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:49.323791  750031 pod_ready.go:82] duration metric: took 3.005116945s for pod "kube-controller-manager-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:49.323802  750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-95qsw" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:49.327986  750031 pod_ready.go:93] pod "kube-proxy-95qsw" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:49.328008  750031 pod_ready.go:82] duration metric: took 4.200296ms for pod "kube-proxy-95qsw" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:49.328018  750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:49.332000  750031 pod_ready.go:93] pod "kube-scheduler-false-244099" in "kube-system" namespace has status "Ready":"True"
	I0127 12:58:49.332023  750031 pod_ready.go:82] duration metric: took 3.99765ms for pod "kube-scheduler-false-244099" in "kube-system" namespace to be "Ready" ...
	I0127 12:58:49.332031  750031 pod_ready.go:39] duration metric: took 8.049711508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
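
The readiness polling above walks each system-critical component label in turn; a plain kubectl equivalent for one of them (the DNS label, with the same 15m budget) would be:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m
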
	I0127 12:58:49.332055  750031 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:58:49.332126  750031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:58:49.344458  750031 api_server.go:72] duration metric: took 9.063347304s to wait for apiserver process to appear ...
	I0127 12:58:49.344488  750031 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:58:49.344512  750031 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0127 12:58:49.349202  750031 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0127 12:58:49.350191  750031 api_server.go:141] control plane version: v1.32.1
	I0127 12:58:49.350215  750031 api_server.go:131] duration metric: took 5.720252ms to wait for apiserver health ...
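
The healthz probe goes straight to the apiserver on the node IP; /healthz (like /livez and /readyz) is normally readable without credentials, since anonymous requests are granted the public info endpoints by default, so the same check from the host is simply:

	curl -k https://192.168.85.2:8443/healthz    # expected: ok
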
	I0127 12:58:49.350223  750031 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:58:49.355004  750031 system_pods.go:59] 7 kube-system pods found
	I0127 12:58:49.355036  750031 system_pods.go:61] "coredns-668d6bf9bc-tns2z" [cca8ac17-e37f-4929-ba8c-a864654b2f09] Running
	I0127 12:58:49.355044  750031 system_pods.go:61] "etcd-false-244099" [5810d0f0-dda9-4cb1-a159-b7e0838dbd0d] Running
	I0127 12:58:49.355048  750031 system_pods.go:61] "kube-apiserver-false-244099" [e1d41a3f-17cf-4428-b4f0-b9da38901a34] Running
	I0127 12:58:49.355054  750031 system_pods.go:61] "kube-controller-manager-false-244099" [7eea3b17-ca32-4a4c-91be-58f7b94ed885] Running
	I0127 12:58:49.355060  750031 system_pods.go:61] "kube-proxy-95qsw" [f78299cf-1d12-4da6-a21f-e8316e43af1a] Running
	I0127 12:58:49.355065  750031 system_pods.go:61] "kube-scheduler-false-244099" [d421150b-4093-41a9-9add-4858bddf30fe] Running
	I0127 12:58:49.355072  750031 system_pods.go:61] "storage-provisioner" [9339841d-38e8-4d6b-b60e-a69edec0b104] Running
	I0127 12:58:49.355086  750031 system_pods.go:74] duration metric: took 4.856318ms to wait for pod list to return data ...
	I0127 12:58:49.355096  750031 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:58:49.357905  750031 default_sa.go:45] found service account: "default"
	I0127 12:58:49.357928  750031 default_sa.go:55] duration metric: took 2.823978ms for default service account to be created ...
	I0127 12:58:49.357938  750031 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:58:49.473041  750031 system_pods.go:87] 7 kube-system pods found
	I0127 12:58:49.671011  750031 system_pods.go:105] "coredns-668d6bf9bc-tns2z" [cca8ac17-e37f-4929-ba8c-a864654b2f09] Running
	I0127 12:58:49.671050  750031 system_pods.go:105] "etcd-false-244099" [5810d0f0-dda9-4cb1-a159-b7e0838dbd0d] Running
	I0127 12:58:49.671058  750031 system_pods.go:105] "kube-apiserver-false-244099" [e1d41a3f-17cf-4428-b4f0-b9da38901a34] Running
	I0127 12:58:49.671065  750031 system_pods.go:105] "kube-controller-manager-false-244099" [7eea3b17-ca32-4a4c-91be-58f7b94ed885] Running
	I0127 12:58:49.671071  750031 system_pods.go:105] "kube-proxy-95qsw" [f78299cf-1d12-4da6-a21f-e8316e43af1a] Running
	I0127 12:58:49.671077  750031 system_pods.go:105] "kube-scheduler-false-244099" [d421150b-4093-41a9-9add-4858bddf30fe] Running
	I0127 12:58:49.671083  750031 system_pods.go:105] "storage-provisioner" [9339841d-38e8-4d6b-b60e-a69edec0b104] Running
	I0127 12:58:49.671093  750031 system_pods.go:147] duration metric: took 313.147642ms to wait for k8s-apps to be running ...
	I0127 12:58:49.671107  750031 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:58:49.671167  750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:58:49.685122  750031 system_svc.go:56] duration metric: took 14.004286ms WaitForService to wait for kubelet
	I0127 12:58:49.685155  750031 kubeadm.go:582] duration metric: took 9.404048085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:58:49.685178  750031 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:58:49.870998  750031 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0127 12:58:49.871025  750031 node_conditions.go:123] node cpu capacity is 8
	I0127 12:58:49.871038  750031 node_conditions.go:105] duration metric: took 185.85445ms to run NodePressure ...
	I0127 12:58:49.871050  750031 start.go:241] waiting for startup goroutines ...
	I0127 12:58:49.871058  750031 start.go:246] waiting for cluster config update ...
	I0127 12:58:49.871072  750031 start.go:255] writing updated cluster config ...
	I0127 12:58:49.871364  750031 ssh_runner.go:195] Run: rm -f paused
	I0127 12:58:49.923741  750031 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:58:49.925599  750031 out.go:177] * Done! kubectl is now configured to use "false-244099" cluster and "default" namespace by default
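
At this point the profile's kubeconfig entry is the current context, so an immediate sanity check from the host would be:

	kubectl config current-context    # false-244099
	kubectl get nodes -o wide
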
	I0127 12:58:46.686588  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	I0127 12:58:49.185623  714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	Jan 27 12:44:07 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:07.420092961Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 27 12:44:07 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:07.420272119Z" level=info msg="API listen on [::]:2376"
	Jan 27 12:44:07 offline-docker-649313 systemd[1]: Started Docker Application Container Engine.
	Jan 27 12:44:07 offline-docker-649313 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Start docker client with request timeout 0s"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Loaded network plugin cni"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 27 12:44:07 offline-docker-649313 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/274ceb1d778af04def78de0fa10867100c2effad7ac3195db386a35b283abb58/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24edc75f83d95672f93b41e3b800ff4db1ccf9f9e9934545ed3871063f654fcd/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee05ddccec7f3a0010e81a31ead44b5a551efb4dbc61388c82054141c2f0fa5d/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3d5f35fa6601d2db883f5118f222ff18f9e251801e97e2fa20b695c4289f942/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6784fe75024c94477de9c9dcddf350673b727ec274233c2977c05c966e42d22b/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d149963c0f5417a4bfd7ff76fba93e74bcbe5c8567fe8c7e92dfc73f237f629/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0e922fc5b50a5e3f2fbbdf479a25e30299c64aab3e00b7640846d5596550e0eb/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/27063302b3d83e8d66a8a55a42c513af7780289d2ef43b6a9aa3f55dce157d3f/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Jan 27 12:44:33 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jan 27 12:44:59 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:59.723655025Z" level=info msg="ignoring event" container=d47febb25c7020fe2e70988c4383aadc026bfb71145087b6df8688601199e639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c9b8b4ed10388       6e38f40d628db       13 minutes ago      Running             storage-provisioner       1                   27063302b3d83       storage-provisioner
	d47febb25c702       6e38f40d628db       14 minutes ago      Exited              storage-provisioner       0                   27063302b3d83       storage-provisioner
	77059df7fccb1       c69fa2e9cbf5f       14 minutes ago      Running             coredns                   0                   0e922fc5b50a5       coredns-668d6bf9bc-6nkx4
	ad7d572863d3f       c69fa2e9cbf5f       14 minutes ago      Running             coredns                   0                   9d149963c0f54       coredns-668d6bf9bc-7rv77
	a971d29cc752a       e29f9c7391fd9       14 minutes ago      Running             kube-proxy                0                   6784fe75024c9       kube-proxy-nwtdt
	f79d8bf90123d       a9e7e6b294baf       14 minutes ago      Running             etcd                      0                   b3d5f35fa6601       etcd-offline-docker-649313
	d4f44e36ec71d       95c0bda56fc4d       14 minutes ago      Running             kube-apiserver            0                   24edc75f83d95       kube-apiserver-offline-docker-649313
	6e1305f891a45       2b0d6572d062c       14 minutes ago      Running             kube-scheduler            0                   ee05ddccec7f3       kube-scheduler-offline-docker-649313
	1451ed12e7f6e       019ee182b58e2       14 minutes ago      Running             kube-controller-manager   0                   274ceb1d778af       kube-controller-manager-offline-docker-649313
	
	
	==> coredns [77059df7fccb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52696 - 18710 "HINFO IN 7084254789387126238.3589103866241268019. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007717295s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[976123763]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30000ms):
	Trace[976123763]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
	Trace[976123763]: [30.000823065s] [30.000823065s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1560918851]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30001ms):
	Trace[1560918851]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
	Trace[1560918851]: [30.001006858s] [30.001006858s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[441518432]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30001ms):
	Trace[441518432]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
	Trace[441518432]: [30.00109415s] [30.00109415s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [ad7d572863d3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40550 - 6146 "HINFO IN 1544229250248001749.8501780058627845564. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009925252s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[228167203]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30000ms):
	Trace[228167203]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
	Trace[228167203]: [30.000886848s] [30.000886848s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[486211484]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30000ms):
	Trace[486211484]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
	Trace[486211484]: [30.000853603s] [30.000853603s] END
	[INFO] plugin/kubernetes: Trace[1023521096]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30001ms):
	Trace[1023521096]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
	Trace[1023521096]: [30.001046101s] [30.001046101s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
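
Both coredns replicas show the same symptom: list/watch calls to the kubernetes Service VIP (10.96.0.1:443) time out for the first ~30s after startup, which is consistent with the pods coming up before kube-proxy has programmed the Service rules. Once the node settles, the VIP can be re-tested from a throwaway pod (the image and pod name here are illustrative):

	kubectl run vipcheck --rm -it --restart=Never --image=curlimages/curl --command -- \
	  curl -sk https://10.96.0.1/healthz
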
	
	
	==> describe nodes <==
	Name:               offline-docker-649313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=offline-docker-649313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=offline-docker-649313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_44_23_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:44:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  offline-docker-649313
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:58:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:53:53 +0000   Mon, 27 Jan 2025 12:44:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:53:53 +0000   Mon, 27 Jan 2025 12:44:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:53:53 +0000   Mon, 27 Jan 2025 12:44:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:53:53 +0000   Mon, 27 Jan 2025 12:44:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    offline-docker-649313
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2e68375a36b4ab39e70e74b0bae1ce9
	  System UUID:                fdbaac16-a4b6-4b1a-ad65-83886decab7b
	  Boot ID:                    bc9990d9-5982-4f92-9b4e-1af016df98ed
	  Kernel Version:             5.15.0-1074-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-6nkx4                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-668d6bf9bc-7rv77                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-offline-docker-649313                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kube-apiserver-offline-docker-649313             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-offline-docker-649313    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nwtdt                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-offline-docker-649313             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             240Mi (0%)  340Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node offline-docker-649313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node offline-docker-649313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node offline-docker-649313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node offline-docker-649313 event: Registered Node offline-docker-649313 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 5e f7 2b c2 4c 08 06
	[Jan27 12:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 4c 3d ea f8 b4 08 06
	[  +0.000920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 62 2f 71 d4 2d 08 06
	[ +23.137502] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f7 41 88 91 9f 08 06
	[ +24.627716] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 f8 fc 76 b3 98 08 06
	[  +0.000571] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 4c 3d ea f8 b4 08 06
	[Jan27 12:58] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
	[  +0.182785] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
	[  +0.020535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2e 6d 1f ff 91 08 06
	[ +17.269528] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 d8 f3 ff 38 52 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
	[  +4.619340] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 92 11 b9 6f 02 08 06
	[  +0.088249] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 03 34 a3 fd 56 08 06
	
	
	==> etcd [f79d8bf90123] <==
	{"level":"info","ts":"2025-01-27T12:44:18.585447Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:44:18.585620Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:44:18.585693Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:44:28.214313Z","caller":"traceutil/trace.go:171","msg":"trace[1878519114] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"116.223905ms","start":"2025-01-27T12:44:28.098062Z","end":"2025-01-27T12:44:28.214286Z","steps":["trace[1878519114] 'process raft request'  (duration: 108.816283ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.434245Z","caller":"traceutil/trace.go:171","msg":"trace[50517336] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"131.765057ms","start":"2025-01-27T12:44:28.302462Z","end":"2025-01-27T12:44:28.434227Z","steps":["trace[50517336] 'process raft request'  (duration: 131.700567ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.434262Z","caller":"traceutil/trace.go:171","msg":"trace[136072165] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"132.443082ms","start":"2025-01-27T12:44:28.301794Z","end":"2025-01-27T12:44:28.434237Z","steps":["trace[136072165] 'process raft request'  (duration: 132.313199ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.434387Z","caller":"traceutil/trace.go:171","msg":"trace[1485636036] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"131.691457ms","start":"2025-01-27T12:44:28.302689Z","end":"2025-01-27T12:44:28.434380Z","steps":["trace[1485636036] 'process raft request'  (duration: 131.49926ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.434277Z","caller":"traceutil/trace.go:171","msg":"trace[2136396416] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"153.967432ms","start":"2025-01-27T12:44:28.280294Z","end":"2025-01-27T12:44:28.434262Z","steps":["trace[2136396416] 'process raft request'  (duration: 84.276722ms)","trace[2136396416] 'compare'  (duration: 69.388116ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:44:28.596528Z","caller":"traceutil/trace.go:171","msg":"trace[1600605280] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"131.531049ms","start":"2025-01-27T12:44:28.464967Z","end":"2025-01-27T12:44:28.596498Z","steps":["trace[1600605280] 'process raft request'  (duration: 102.250638ms)","trace[1600605280] 'compare'  (duration: 29.032642ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:44:28.596575Z","caller":"traceutil/trace.go:171","msg":"trace[1899738883] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"131.164682ms","start":"2025-01-27T12:44:28.465374Z","end":"2025-01-27T12:44:28.596539Z","steps":["trace[1899738883] 'process raft request'  (duration: 131.096097ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.596540Z","caller":"traceutil/trace.go:171","msg":"trace[1940056429] linearizableReadLoop","detail":"{readStateIndex:339; appliedIndex:337; }","duration":"131.368951ms","start":"2025-01-27T12:44:28.465147Z","end":"2025-01-27T12:44:28.596516Z","steps":["trace[1940056429] 'read index received'  (duration: 46.521085ms)","trace[1940056429] 'applied index is now lower than readState.Index'  (duration: 84.846971ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:44:28.596622Z","caller":"traceutil/trace.go:171","msg":"trace[460231049] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"131.348244ms","start":"2025-01-27T12:44:28.465257Z","end":"2025-01-27T12:44:28.596605Z","steps":["trace[460231049] 'process raft request'  (duration: 131.16131ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.596690Z","caller":"traceutil/trace.go:171","msg":"trace[903453467] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"131.064057ms","start":"2025-01-27T12:44:28.465612Z","end":"2025-01-27T12:44:28.596676Z","steps":["trace[903453467] 'process raft request'  (duration: 130.877363ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:44:28.596747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.539035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-6nkx4\" limit:1 ","response":"range_response_count:1 size:3579"}
	{"level":"info","ts":"2025-01-27T12:44:28.596806Z","caller":"traceutil/trace.go:171","msg":"trace[964555452] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-6nkx4; range_end:; response_count:1; response_revision:329; }","duration":"131.672294ms","start":"2025-01-27T12:44:28.465124Z","end":"2025-01-27T12:44:28.596796Z","steps":["trace[964555452] 'agreement among raft nodes before linearized reading'  (duration: 131.44818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:44:28.598768Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.433672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:3995"}
	{"level":"info","ts":"2025-01-27T12:44:28.598837Z","caller":"traceutil/trace.go:171","msg":"trace[781463642] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:330; }","duration":"119.530249ms","start":"2025-01-27T12:44:28.479290Z","end":"2025-01-27T12:44:28.598821Z","steps":["trace[781463642] 'agreement among raft nodes before linearized reading'  (duration: 119.366463ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.719582Z","caller":"traceutil/trace.go:171","msg":"trace[1109424790] transaction","detail":"{read_only:false; number_of_response:1; response_revision:332; }","duration":"113.530217ms","start":"2025-01-27T12:44:28.606024Z","end":"2025-01-27T12:44:28.719554Z","steps":["trace[1109424790] 'process raft request'  (duration: 113.433367ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:44:28.719661Z","caller":"traceutil/trace.go:171","msg":"trace[774799129] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"113.75536ms","start":"2025-01-27T12:44:28.605889Z","end":"2025-01-27T12:44:28.719645Z","steps":["trace[774799129] 'process raft request'  (duration: 95.922896ms)","trace[774799129] 'compare'  (duration: 17.552421ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:44:28.719606Z","caller":"traceutil/trace.go:171","msg":"trace[175832720] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"109.756561ms","start":"2025-01-27T12:44:28.609829Z","end":"2025-01-27T12:44:28.719586Z","steps":["trace[175832720] 'process raft request'  (duration: 109.653237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:44:37.258390Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.688655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638350204469583054 > lease_revoke:<id:590694a7ca911ae7>","response":"size:28"}
	{"level":"info","ts":"2025-01-27T12:49:18.447257Z","caller":"traceutil/trace.go:171","msg":"trace[213988150] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"157.866585ms","start":"2025-01-27T12:49:18.289353Z","end":"2025-01-27T12:49:18.447220Z","steps":["trace[213988150] 'process raft request'  (duration: 91.011989ms)","trace[213988150] 'compare'  (duration: 66.659013ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:54:19.086862Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":618}
	{"level":"info","ts":"2025-01-27T12:54:19.091406Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":618,"took":"4.286588ms","hash":686489464,"current-db-size-bytes":1863680,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1863680,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-01-27T12:54:19.091440Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":686489464,"revision":618,"compact-revision":-1}
	
	
	==> kernel <==
	 12:58:51 up  8:41,  0 users,  load average: 3.87, 3.32, 2.72
	Linux offline-docker-649313 5.15.0-1074-gcp #83~20.04.1-Ubuntu SMP Wed Dec 18 20:42:35 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d4f44e36ec71] <==
	I0127 12:44:20.577708       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:44:20.577734       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:44:20.577768       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:44:20.577776       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:44:20.577782       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:44:20.577796       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:44:20.577789       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:44:20.577929       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:44:20.577873       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:44:20.780352       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:44:21.440853       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 12:44:21.445373       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 12:44:21.445395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:44:21.888800       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:44:21.926450       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:44:21.984789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 12:44:21.990224       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 12:44:21.991284       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:44:21.996268       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:44:22.499447       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:44:22.982869       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:44:22.991528       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 12:44:22.998930       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:44:27.098779       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:44:27.749857       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [1451ed12e7f6] <==
	I0127 12:44:27.047624       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:44:27.048101       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:44:27.048192       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:44:27.050059       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:44:27.051489       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:44:27.051522       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:44:27.051625       1 shared_informer.go:320] Caches are synced for job
	I0127 12:44:27.059497       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:44:27.061702       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:44:27.069038       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:44:27.991165       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
	I0127 12:44:28.513386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.411166073s"
	I0127 12:44:28.600348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.687529ms"
	I0127 12:44:28.600469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.541µs"
	I0127 12:44:28.720923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="68.806µs"
	I0127 12:44:28.761584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.546µs"
	I0127 12:44:30.328680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.236µs"
	I0127 12:44:30.370093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.587µs"
	I0127 12:44:33.285369       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
	I0127 12:45:02.369366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.38088ms"
	I0127 12:45:02.369508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.998µs"
	I0127 12:45:02.391337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="8.240792ms"
	I0127 12:45:02.391443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.63µs"
	I0127 12:48:48.723448       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
	I0127 12:53:53.779383       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
	
	
	==> kube-proxy [a971d29cc752] <==
	I0127 12:44:29.375739       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:44:29.596702       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0127 12:44:29.596773       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:44:29.623689       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0127 12:44:29.623771       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:44:29.626176       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:44:29.626677       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:44:29.626704       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:44:29.629191       1 config.go:199] "Starting service config controller"
	I0127 12:44:29.629222       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:44:29.629250       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:44:29.629253       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:44:29.629776       1 config.go:329] "Starting node config controller"
	I0127 12:44:29.629785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:44:29.729881       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:44:29.729899       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:44:29.729935       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6e1305f891a4] <==
	W0127 12:44:20.580428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:44:20.582824       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:20.580506       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:44:20.582877       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:20.580575       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:44:20.582931       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:20.580119       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:44:20.582952       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:20.581876       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:44:20.582972       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:20.582237       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:44:20.582990       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.389607       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:44:21.389663       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.439086       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:44:21.439229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.473812       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:44:21.473859       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.580813       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:44:21.580859       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.678060       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:44:21.678115       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:44:21.719523       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:44:21.719573       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:44:22.221886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864388    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2371845-b951-4e52-9c2a-01a394a9b403-xtables-lock\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864444    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qlx\" (UniqueName: \"kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864481    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2371845-b951-4e52-9c2a-01a394a9b403-lib-modules\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993172    2511 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993273    2511 projected.go:194] Error preparing data for projected volume kube-api-access-46qlx for pod kube-system/kube-proxy-nwtdt: configmap "kube-root-ca.crt" not found
	Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993397    2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx podName:a2371845-b951-4e52-9c2a-01a394a9b403 nodeName:}" failed. No retries permitted until 2025-01-27 12:44:28.49336454 +0000 UTC m=+5.768326501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46qlx" (UniqueName: "kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx") pod "kube-proxy-nwtdt" (UID: "a2371845-b951-4e52-9c2a-01a394a9b403") : configmap "kube-root-ca.crt" not found
	Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.467623    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzfzj\" (UniqueName: \"kubernetes.io/projected/44bc4f70-dd40-4791-864c-0458af6a5fe8-kube-api-access-dzfzj\") pod \"coredns-668d6bf9bc-6nkx4\" (UID: \"44bc4f70-dd40-4791-864c-0458af6a5fe8\") " pod="kube-system/coredns-668d6bf9bc-6nkx4"
	Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.467693    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44bc4f70-dd40-4791-864c-0458af6a5fe8-config-volume\") pod \"coredns-668d6bf9bc-6nkx4\" (UID: \"44bc4f70-dd40-4791-864c-0458af6a5fe8\") " pod="kube-system/coredns-668d6bf9bc-6nkx4"
	Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.569030    2511 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.769359    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6-config-volume\") pod \"coredns-668d6bf9bc-7rv77\" (UID: \"d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6\") " pod="kube-system/coredns-668d6bf9bc-7rv77"
	Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.769419    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b65k\" (UniqueName: \"kubernetes.io/projected/d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6-kube-api-access-9b65k\") pod \"coredns-668d6bf9bc-7rv77\" (UID: \"d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6\") " pod="kube-system/coredns-668d6bf9bc-7rv77"
	Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.071836    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/56cf6fce-41be-4b78-9a32-86e8e902d97c-tmp\") pod \"storage-provisioner\" (UID: \"56cf6fce-41be-4b78-9a32-86e8e902d97c\") " pod="kube-system/storage-provisioner"
	Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.071915    2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmp5\" (UniqueName: \"kubernetes.io/projected/56cf6fce-41be-4b78-9a32-86e8e902d97c-kube-api-access-psmp5\") pod \"storage-provisioner\" (UID: \"56cf6fce-41be-4b78-9a32-86e8e902d97c\") " pod="kube-system/storage-provisioner"
	Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.171721    2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d149963c0f5417a4bfd7ff76fba93e74bcbe5c8567fe8c7e92dfc73f237f629"
	Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.177250    2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e922fc5b50a5e3f2fbbdf479a25e30299c64aab3e00b7640846d5596550e0eb"
	Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.296675    2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6784fe75024c94477de9c9dcddf350673b727ec274233c2977c05c966e42d22b"
	Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.344930    2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nwtdt" podStartSLOduration=3.344906105 podStartE2EDuration="3.344906105s" podCreationTimestamp="2025-01-27 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.344724475 +0000 UTC m=+7.619686438" watchObservedRunningTime="2025-01-27 12:44:30.344906105 +0000 UTC m=+7.619868078"
	Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.345051    2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6nkx4" podStartSLOduration=2.345042633 podStartE2EDuration="2.345042633s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.328447552 +0000 UTC m=+7.603409516" watchObservedRunningTime="2025-01-27 12:44:30.345042633 +0000 UTC m=+7.620004596"
	Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.371449    2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7rv77" podStartSLOduration=2.371417563 podStartE2EDuration="2.371417563s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.369935694 +0000 UTC m=+7.644897657" watchObservedRunningTime="2025-01-27 12:44:30.371417563 +0000 UTC m=+7.646379521"
	Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.371647    2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.371633287 podStartE2EDuration="2.371633287s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.36100207 +0000 UTC m=+7.635964033" watchObservedRunningTime="2025-01-27 12:44:30.371633287 +0000 UTC m=+7.646595250"
	Jan 27 12:44:31 offline-docker-649313 kubelet[2511]: I0127 12:44:31.340575    2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jan 27 12:44:31 offline-docker-649313 kubelet[2511]: I0127 12:44:31.340579    2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jan 27 12:44:33 offline-docker-649313 kubelet[2511]: I0127 12:44:33.270132    2511 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 12:44:33 offline-docker-649313 kubelet[2511]: I0127 12:44:33.271156    2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 12:45:00 offline-docker-649313 kubelet[2511]: I0127 12:45:00.524439    2511 scope.go:117] "RemoveContainer" containerID="d47febb25c7020fe2e70988c4383aadc026bfb71145087b6df8688601199e639"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p offline-docker-649313 -n offline-docker-649313
helpers_test.go:261: (dbg) Run:  kubectl --context offline-docker-649313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestOffline FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "offline-docker-649313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-649313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-649313: (2.198024776s)
--- FAIL: TestOffline (904.14s)

                                                
                                    

Test pass (323/345)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.62
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 3.56
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.2
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1
21 TestBinaryMirror 0.75
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.49
29 TestAddons/serial/Volcano 39.24
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 14.64
36 TestAddons/parallel/Ingress 18.17
37 TestAddons/parallel/InspektorGadget 11.79
38 TestAddons/parallel/MetricsServer 5.57
40 TestAddons/parallel/CSI 34.94
41 TestAddons/parallel/Headlamp 22.37
42 TestAddons/parallel/CloudSpanner 5.43
43 TestAddons/parallel/LocalPath 54.04
44 TestAddons/parallel/NvidiaDevicePlugin 6.46
45 TestAddons/parallel/Yakd 11.6
46 TestAddons/parallel/AmdGpuDevicePlugin 6.59
47 TestAddons/StoppedEnableDisable 10.99
48 TestCertOptions 24.71
49 TestCertExpiration 227.83
50 TestDockerFlags 23.7
51 TestForceSystemdFlag 27.45
52 TestForceSystemdEnv 27.17
54 TestKVMDriverInstallOrUpdate 1.26
58 TestErrorSpam/setup 20.79
59 TestErrorSpam/start 0.56
60 TestErrorSpam/status 0.86
61 TestErrorSpam/pause 1.15
62 TestErrorSpam/unpause 1.43
63 TestErrorSpam/stop 1.93
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 60.77
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 31.63
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.13
75 TestFunctional/serial/CacheCmd/cache/add_local 0.66
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.22
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 41.7
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.89
86 TestFunctional/serial/LogsFileCmd 0.91
87 TestFunctional/serial/InvalidService 4.16
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 17.32
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.91
97 TestFunctional/parallel/ServiceCmdConnect 7.66
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 44.85
101 TestFunctional/parallel/SSHCmd 0.62
102 TestFunctional/parallel/CpCmd 1.88
103 TestFunctional/parallel/MySQL 24.28
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.68
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
113 TestFunctional/parallel/License 0.19
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.64
116 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.26
122 TestFunctional/parallel/DockerEnv/bash 1.02
123 TestFunctional/parallel/ServiceCmd/List 0.53
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
136 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
137 TestFunctional/parallel/ImageCommands/Setup 2.25
138 TestFunctional/parallel/ServiceCmd/Format 0.37
139 TestFunctional/parallel/ServiceCmd/URL 0.34
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
141 TestFunctional/parallel/MountCmd/any-port 8.73
142 TestFunctional/parallel/ProfileCmd/profile_list 0.37
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.97
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
154 TestFunctional/parallel/MountCmd/specific-port 2.2
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.11
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 99.43
163 TestMultiControlPlane/serial/DeployApp 5.28
164 TestMultiControlPlane/serial/PingHostFromPods 1.05
165 TestMultiControlPlane/serial/AddWorkerNode 19.38
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
168 TestMultiControlPlane/serial/CopyFile 15.64
169 TestMultiControlPlane/serial/StopSecondaryNode 11.45
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 36.91
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.89
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.21
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
176 TestMultiControlPlane/serial/StopCluster 32.27
177 TestMultiControlPlane/serial/RestartCluster 78.99
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
179 TestMultiControlPlane/serial/AddSecondaryNode 39.02
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
183 TestImageBuild/serial/Setup 21.13
184 TestImageBuild/serial/NormalBuild 1.41
185 TestImageBuild/serial/BuildWithBuildArg 0.83
186 TestImageBuild/serial/BuildWithDockerIgnore 0.59
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.63
191 TestJSONOutput/start/Command 64.15
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.48
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.44
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 10.73
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.2
216 TestKicCustomNetwork/create_custom_network 25.42
217 TestKicCustomNetwork/use_default_bridge_network 22.84
218 TestKicExistingNetwork 22.53
219 TestKicCustomSubnet 25.89
220 TestKicStaticIP 23.43
221 TestMainNoArgs 0.05
222 TestMinikubeProfile 51.09
225 TestMountStart/serial/StartWithMountFirst 6.46
226 TestMountStart/serial/VerifyMountFirst 0.24
227 TestMountStart/serial/StartWithMountSecond 9.27
228 TestMountStart/serial/VerifyMountSecond 0.24
229 TestMountStart/serial/DeleteFirst 1.44
230 TestMountStart/serial/VerifyMountPostDelete 0.24
231 TestMountStart/serial/Stop 1.17
232 TestMountStart/serial/RestartStopped 7.74
233 TestMountStart/serial/VerifyMountPostStop 0.24
236 TestMultiNode/serial/FreshStart2Nodes 73.4
237 TestMultiNode/serial/DeployApp2Nodes 58.28
238 TestMultiNode/serial/PingHostFrom2Pods 0.74
239 TestMultiNode/serial/AddNode 18.04
240 TestMultiNode/serial/MultiNodeLabels 0.06
241 TestMultiNode/serial/ProfileList 0.6
242 TestMultiNode/serial/CopyFile 8.93
243 TestMultiNode/serial/StopNode 2.08
244 TestMultiNode/serial/StartAfterStop 9.61
245 TestMultiNode/serial/RestartKeepsNodes 82.18
246 TestMultiNode/serial/DeleteNode 4.96
247 TestMultiNode/serial/StopMultiNode 21.46
248 TestMultiNode/serial/RestartMultiNode 47.66
249 TestMultiNode/serial/ValidateNameConflict 25.83
254 TestPreload 120.37
256 TestScheduledStopUnix 93.64
257 TestSkaffold 98.42
259 TestInsufficientStorage 10
260 TestRunningBinaryUpgrade 70.92
262 TestKubernetesUpgrade 326.45
263 TestMissingContainerUpgrade 135.41
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
267 TestStoppedBinaryUpgrade/Setup 0.42
274 TestNoKubernetes/serial/StartWithK8s 35.19
275 TestStoppedBinaryUpgrade/Upgrade 110.48
276 TestNoKubernetes/serial/StartWithStopK8s 16.92
277 TestNoKubernetes/serial/Start 5.85
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
279 TestNoKubernetes/serial/ProfileList 1.61
280 TestNoKubernetes/serial/Stop 1.18
281 TestNoKubernetes/serial/StartNoArgs 6.95
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
285 TestPause/serial/Start 72.04
297 TestPause/serial/SecondStartNoReconfiguration 31.24
298 TestPause/serial/Pause 0.56
299 TestPause/serial/VerifyStatus 0.31
300 TestPause/serial/Unpause 0.45
301 TestPause/serial/PauseAgain 0.63
302 TestPause/serial/DeletePaused 2.12
303 TestPause/serial/VerifyDeletedResources 0.71
305 TestStartStop/group/old-k8s-version/serial/FirstStart 153.95
307 TestStartStop/group/no-preload/serial/FirstStart 38.44
309 TestStartStop/group/embed-certs/serial/FirstStart 36.68
310 TestStartStop/group/no-preload/serial/DeployApp 7.25
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
312 TestStartStop/group/no-preload/serial/Stop 10.77
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
314 TestStartStop/group/no-preload/serial/SecondStart 298.74
315 TestStartStop/group/embed-certs/serial/DeployApp 8.28
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
317 TestStartStop/group/embed-certs/serial/Stop 10.75
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
319 TestStartStop/group/embed-certs/serial/SecondStart 262.42
320 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
321 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
322 TestStartStop/group/old-k8s-version/serial/Stop 10.65
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
324 TestStartStop/group/old-k8s-version/serial/SecondStart 137.01
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
327 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
328 TestStartStop/group/old-k8s-version/serial/Pause 2.46
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.5
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.8
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.55
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
340 TestStartStop/group/embed-certs/serial/Pause 2.65
342 TestStartStop/group/newest-cni/serial/FirstStart 33.87
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
345 TestStartStop/group/no-preload/serial/Pause 2.68
346 TestNetworkPlugins/group/auto/Start 63.3
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
349 TestStartStop/group/newest-cni/serial/Stop 10.8
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
351 TestStartStop/group/newest-cni/serial/SecondStart 14.25
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
355 TestStartStop/group/newest-cni/serial/Pause 2.46
356 TestNetworkPlugins/group/custom-flannel/Start 52.61
357 TestNetworkPlugins/group/auto/KubeletFlags 0.26
358 TestNetworkPlugins/group/auto/NetCatPod 10.18
359 TestNetworkPlugins/group/auto/DNS 0.19
360 TestNetworkPlugins/group/auto/Localhost 0.13
361 TestNetworkPlugins/group/auto/HairPin 0.12
362 TestNetworkPlugins/group/false/Start 37.63
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
365 TestNetworkPlugins/group/custom-flannel/DNS 0.14
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
368 TestNetworkPlugins/group/false/KubeletFlags 0.3
369 TestNetworkPlugins/group/false/NetCatPod 10.26
370 TestNetworkPlugins/group/kindnet/Start 61.4
371 TestNetworkPlugins/group/flannel/Start 34.16
372 TestNetworkPlugins/group/false/DNS 0.14
373 TestNetworkPlugins/group/false/Localhost 0.12
374 TestNetworkPlugins/group/false/HairPin 0.14
375 TestNetworkPlugins/group/calico/Start 66.67
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
378 TestNetworkPlugins/group/flannel/NetCatPod 9.27
379 TestNetworkPlugins/group/flannel/DNS 0.15
380 TestNetworkPlugins/group/flannel/Localhost 0.16
381 TestNetworkPlugins/group/flannel/HairPin 0.12
382 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
383 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
384 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
385 TestNetworkPlugins/group/enable-default-cni/Start 64.82
386 TestNetworkPlugins/group/kindnet/DNS 0.14
387 TestNetworkPlugins/group/kindnet/Localhost 0.15
388 TestNetworkPlugins/group/kindnet/HairPin 0.14
389 TestNetworkPlugins/group/calico/ControllerPod 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
391 TestNetworkPlugins/group/bridge/Start 70.06
392 TestNetworkPlugins/group/calico/KubeletFlags 0.3
393 TestNetworkPlugins/group/calico/NetCatPod 11.24
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
397 TestNetworkPlugins/group/calico/DNS 0.14
398 TestNetworkPlugins/group/calico/Localhost 0.13
399 TestNetworkPlugins/group/calico/HairPin 0.13
400 TestNetworkPlugins/group/kubenet/Start 37.74
401 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
402 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
403 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
404 TestNetworkPlugins/group/kubenet/NetCatPod 10.19
405 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
406 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
407 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
408 TestNetworkPlugins/group/kubenet/DNS 0.14
409 TestNetworkPlugins/group/kubenet/Localhost 0.11
410 TestNetworkPlugins/group/kubenet/HairPin 0.12
411 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
412 TestNetworkPlugins/group/bridge/NetCatPod 8.26
413 TestNetworkPlugins/group/bridge/DNS 0.12
414 TestNetworkPlugins/group/bridge/Localhost 0.12
415 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (4.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-509391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-509391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.620475085s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.62s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 12:08:33.573493  311307 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0127 12:08:33.573626  311307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-509391
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-509391: exit status 85 (61.486475ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-509391 | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC |          |
	|         | -p download-only-509391        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:08:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:08:28.993762  311319 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:08:28.994035  311319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:08:28.994046  311319 out.go:358] Setting ErrFile to fd 2...
	I0127 12:08:28.994061  311319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:08:28.994287  311319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	W0127 12:08:28.994457  311319 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20317-304536/.minikube/config/config.json: open /home/jenkins/minikube-integration/20317-304536/.minikube/config/config.json: no such file or directory
	I0127 12:08:28.995056  311319 out.go:352] Setting JSON to true
	I0127 12:08:28.995903  311319 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28256,"bootTime":1737951453,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:08:28.996015  311319 start.go:139] virtualization: kvm guest
	I0127 12:08:28.998347  311319 out.go:97] [download-only-509391] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:08:28.998491  311319 notify.go:220] Checking for updates...
	W0127 12:08:28.998516  311319 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 12:08:28.999577  311319 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:08:29.000815  311319 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:08:29.001919  311319 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:08:29.002986  311319 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:08:29.004055  311319 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 12:08:29.005941  311319 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:08:29.006152  311319 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:08:29.026111  311319 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:08:29.026183  311319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:08:29.353735  311319 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-27 12:08:29.345262277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:08:29.353894  311319 docker.go:318] overlay module found
	I0127 12:08:29.355372  311319 out.go:97] Using the docker driver based on user configuration
	I0127 12:08:29.355393  311319 start.go:297] selected driver: docker
	I0127 12:08:29.355399  311319 start.go:901] validating driver "docker" against <nil>
	I0127 12:08:29.355480  311319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:08:29.401005  311319 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-27 12:08:29.392967114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:08:29.401190  311319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:08:29.401868  311319 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0127 12:08:29.402058  311319 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:08:29.403867  311319 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-509391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509391"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
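The non-zero exit is what the test asserts: the download-only profile never created a host, so the logs command can only print the audit table and last-start log above before exiting. A sketch of the same check:

  # Expected to exit non-zero (85 in this run) because the profile has no running host
  minikube logs -p download-only-509391 || echo "exit status $? is expected for a download-only profile"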

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-509391
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (3.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-369888 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-369888 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.558291431s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.56s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 12:08:37.515038  311307 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:08:37.515098  311307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-369888
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-369888: exit status 85 (63.794106ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-509391 | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC |                     |
	|         | -p download-only-509391        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:08 UTC |
	| delete  | -p download-only-509391        | download-only-509391 | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:08 UTC |
	| start   | -o=json --download-only        | download-only-369888 | jenkins | v1.35.0 | 27 Jan 25 12:08 UTC |                     |
	|         | -p download-only-369888        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:08:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:08:33.996861  311671 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:08:33.997087  311671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:08:33.997095  311671 out.go:358] Setting ErrFile to fd 2...
	I0127 12:08:33.997099  311671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:08:33.997284  311671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:08:33.997804  311671 out.go:352] Setting JSON to true
	I0127 12:08:33.998595  311671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28261,"bootTime":1737951453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:08:33.998659  311671 start.go:139] virtualization: kvm guest
	I0127 12:08:34.000614  311671 out.go:97] [download-only-369888] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:08:34.000745  311671 notify.go:220] Checking for updates...
	I0127 12:08:34.002088  311671 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:08:34.003342  311671 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:08:34.004507  311671 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:08:34.005558  311671 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:08:34.006694  311671 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 12:08:34.008674  311671 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:08:34.008873  311671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:08:34.033065  311671 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:08:34.033129  311671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:08:34.079176  311671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-01-27 12:08:34.070967293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:08:34.079284  311671 docker.go:318] overlay module found
	I0127 12:08:34.080822  311671 out.go:97] Using the docker driver based on user configuration
	I0127 12:08:34.080845  311671 start.go:297] selected driver: docker
	I0127 12:08:34.080850  311671 start.go:901] validating driver "docker" against <nil>
	I0127 12:08:34.080930  311671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:08:34.128221  311671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-01-27 12:08:34.119450872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:08:34.128411  311671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:08:34.128926  311671 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0127 12:08:34.129071  311671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:08:34.130732  311671 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-369888 host does not exist
	  To start a cluster, run: "minikube start -p download-only-369888"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-369888
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-983745 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-983745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-983745
--- PASS: TestDownloadOnlyKic (1.00s)

                                                
                                    
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 12:08:39.163772  311307 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-297147 --alsologtostderr --binary-mirror http://127.0.0.1:37621 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-297147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-297147
--- PASS: TestBinaryMirror (0.75s)
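The binary-mirror test leans on kubectl's published sha256 manifest, visible in the "Not caching binary" line above. A sketch of the same checksum verification done by hand, using the URLs from that log line:

  # Download kubectl and its upstream checksum, then verify the pair
  curl -LO https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl
  curl -LO https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check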

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-467520
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-467520: exit status 85 (52.111748ms)

                                                
                                                
-- stdout --
	* Profile "addons-467520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467520"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-467520
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-467520: exit status 85 (53.335763ms)

                                                
                                                
-- stdout --
	* Profile "addons-467520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467520"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (208.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-467520 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-467520 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.490407833s)
--- PASS: TestAddons/Setup (208.49s)
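The setup enables every addon at start time through repeated --addons flags; the same addons can also be toggled on a cluster that is already running. A minimal sketch against the profile from this run:

  # Enable one addon after the fact and confirm the overall addon state
  minikube -p addons-467520 addons enable metrics-server
  minikube -p addons-467520 addons list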

                                                
                                    
TestAddons/serial/Volcano (39.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 12.240684ms
addons_test.go:807: volcano-scheduler stabilized in 12.503364ms
addons_test.go:815: volcano-admission stabilized in 12.638283ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-nrfsv" [466bf3ec-0d41-4256-8f26-5a5d5dc0bbe8] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003542688s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-sfplc" [ca468575-a0e9-4520-b1d2-5822f0164d2a] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004006297s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-pln5d" [1e6b9157-7e5c-474a-b4fa-d796ea745c7d] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00458425s
addons_test.go:842: (dbg) Run:  kubectl --context addons-467520 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-467520 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-467520 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [6a5b8b16-0139-4c7d-a43c-e14a29d344a9] Pending
helpers_test.go:344: "test-job-nginx-0" [6a5b8b16-0139-4c7d-a43c-e14a29d344a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [6a5b8b16-0139-4c7d-a43c-e14a29d344a9] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003087207s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable volcano --alsologtostderr -v=1: (10.892251338s)
--- PASS: TestAddons/serial/Volcano (39.24s)
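The vcjob.yaml fixture itself is not reproduced in this log. A hypothetical Volcano job of the same shape (one nginx task handed to the volcano scheduler in the my-volcano namespace, matching the pod name test-job-nginx-0 above) would look roughly like:

  kubectl --context addons-467520 apply -f - <<'EOF'
  apiVersion: batch.volcano.sh/v1alpha1
  kind: Job
  metadata:
    name: test-job
    namespace: my-volcano
  spec:
    minAvailable: 1
    schedulerName: volcano
    tasks:
      - replicas: 1
        name: nginx
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: nginx
                image: nginx
  EOF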

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-467520 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-467520 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-467520 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-467520 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa860e9e-4f5b-485d-ac58-23096c568af0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa860e9e-4f5b-485d-ac58-23096c568af0] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003483565s
addons_test.go:633: (dbg) Run:  kubectl --context addons-467520 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-467520 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-467520 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

                                                
                                    
TestAddons/parallel/Registry (14.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.72907ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-xjbsr" [d17464d9-429f-42be-9f7b-5ed68223bd2e] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002528572s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xkdbh" [d35435c7-be2e-4745-9721-9ee50dabc156] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003764824s
addons_test.go:331: (dbg) Run:  kubectl --context addons-467520 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-467520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-467520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.922030018s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 ip
2025/01/27 12:13:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.64s)
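Once the wget --spider probe succeeds, the registry addon is typically consumed by pushing through the cluster IP endpoint seen in the GET above (192.168.49.2:5000). A sketch, assuming the host docker daemon is configured to trust that address as an insecure registry and that an nginx image is present locally:

  # Tag and push an image through the addon registry exposed on port 5000
  docker tag nginx "$(minikube -p addons-467520 ip):5000/nginx"
  docker push "$(minikube -p addons-467520 ip):5000/nginx"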

                                                
                                    
TestAddons/parallel/Ingress (18.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-467520 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-467520 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-467520 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [91557b32-0193-4bc8-b9c9-3258b1f73026] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [91557b32-0193-4bc8-b9c9-3258b1f73026] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004411357s
I0127 12:13:38.260717  311307 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-467520 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable ingress-dns --alsologtostderr -v=1: (1.499485871s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable ingress --alsologtostderr -v=1: (7.575824435s)
--- PASS: TestAddons/parallel/Ingress (18.17s)
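The curl above runs inside the node over ssh and spoofs the Host header. From the host side, the same ingress can be exercised without touching /etc/hosts by using curl's --resolve flag; a sketch, assuming the host name from the nginx-ingress fixture:

  # Map nginx.example.com to the cluster IP for this one request
  curl -s --resolve "nginx.example.com:80:$(minikube -p addons-467520 ip)" http://nginx.example.com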

                                                
                                    
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hhrk6" [9964aa8d-963b-40c3-ac4f-589baec23501] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00410771s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable inspektor-gadget --alsologtostderr -v=1: (5.781868934s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.553425ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-66scp" [c895cfb6-a5bd-4f41-b042-9860116dda26] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003293232s
addons_test.go:402: (dbg) Run:  kubectl --context addons-467520 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

                                                
                                    
TestAddons/parallel/CSI (34.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 12:13:26.123529  311307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 12:13:26.127897  311307 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 12:13:26.127920  311307 kapi.go:107] duration metric: took 4.402492ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.411296ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-467520 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-467520 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [edb2397b-7c7e-45e4-afa9-69ddf76ea0c5] Pending
helpers_test.go:344: "task-pv-pod" [edb2397b-7c7e-45e4-afa9-69ddf76ea0c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [edb2397b-7c7e-45e4-afa9-69ddf76ea0c5] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00353367s
addons_test.go:511: (dbg) Run:  kubectl --context addons-467520 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-467520 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-467520 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-467520 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-467520 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [053e15de-ff00-4b5c-8da6-e3277810321b] Pending
helpers_test.go:344: "task-pv-pod-restore" [053e15de-ff00-4b5c-8da6-e3277810321b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00363668s
addons_test.go:553: (dbg) Run:  kubectl --context addons-467520 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-467520 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-467520 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.544726229s)
--- PASS: TestAddons/parallel/CSI (34.94s)
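The pvc.yaml fixture is not shown in the log; a hypothetical claim of the same shape, assuming the csi-hostpath driver's usual csi-hostpath-sc storage class and an illustrative 1Gi size:

  kubectl --context addons-467520 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: csi-hostpath-sc
  EOF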

                                                
                                    
TestAddons/parallel/Headlamp (22.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-467520 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-rjk5j" [4b873b40-0453-46bb-b9ec-7112e762c12e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-rjk5j" [4b873b40-0453-46bb-b9ec-7112e762c12e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003306838s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable headlamp --alsologtostderr -v=1: (5.708359559s)
--- PASS: TestAddons/parallel/Headlamp (22.37s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.43s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-64dlq" [3abbcb69-8f33-4dab-baea-9e0cdf94fe06] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003327181s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.43s)

                                                
                                    
TestAddons/parallel/LocalPath (54.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-467520 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-467520 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [aa1ae870-e24b-43fe-a4cd-e98877a32913] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [aa1ae870-e24b-43fe-a4cd-e98877a32913] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [aa1ae870-e24b-43fe-a4cd-e98877a32913] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003890759s
addons_test.go:906: (dbg) Run:  kubectl --context addons-467520 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 ssh "cat /opt/local-path-provisioner/pvc-0e86559a-ff29-4010-930c-753f5e9c9440_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-467520 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-467520 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.13706934s)
--- PASS: TestAddons/parallel/LocalPath (54.04s)
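The repeated pvc polling above is expected behavior: the local-path provisioner binds WaitForFirstConsumer, so the claim stays Pending until the consuming pod is scheduled. A hypothetical claim of the same shape as the fixture, assuming rancher's default local-path storage class name and an illustrative 64Mi size:

  kubectl --context addons-467520 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 64Mi
    storageClassName: local-path
  EOF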

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.46s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p8whf" [190f4d67-01dd-4b2c-aa0c-6b9b342f989e] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003498771s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

                                                
                                    
TestAddons/parallel/Yakd (11.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-f2rbr" [e1904381-d882-4949-95a5-9ced8b663b85] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004116633s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-467520 addons disable yakd --alsologtostderr -v=1: (5.599439205s)
--- PASS: TestAddons/parallel/Yakd (11.60s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-kxsjd" [0507b0ed-56b4-4514-bc92-798d59dc60ed] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003528037s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-467520 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.59s)

                                                
                                    
TestAddons/StoppedEnableDisable (10.99s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-467520
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-467520: (10.746421628s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-467520
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-467520
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-467520
--- PASS: TestAddons/StoppedEnableDisable (10.99s)

                                                
                                    
TestCertOptions (24.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-013110 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-013110 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (21.984381607s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-013110 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-013110 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-013110 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-013110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-013110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-013110: (2.109860003s)
--- PASS: TestCertOptions (24.71s)
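
For reference, the SAN check driven through openssl above can also be done with Go's standard crypto/x509 package. This is a minimal sketch, not the test's own code; it assumes /var/lib/minikube/certs/apiserver.crt has first been copied out of the node into a local apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumption: the cert was copied out of the node beforehand.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The --apiserver-names and --apiserver-ips flags should surface as SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
	}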

                                                
                                    
TestCertExpiration (227.83s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879419 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0127 12:47:08.461777  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:13.702739  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879419 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (23.594897842s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879419 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879419 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (22.032388071s)
helpers_test.go:175: Cleaning up "cert-expiration-879419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-879419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-879419: (2.20203088s)
--- PASS: TestCertExpiration (227.83s)

                                                
                                    
TestDockerFlags (23.7s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-297879 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-297879 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.465132604s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-297879 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-297879 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-297879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-297879
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-297879: (1.687451485s)
--- PASS: TestDockerFlags (23.70s)
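
The two systemctl show calls above are what prove that --docker-env and --docker-opt reached the daemon. A minimal sketch of the same assertion from Go, with the expected values taken from the flags in the log (binary path and profile name as above, so it only works while that profile exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs inside the node via SSH.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-297879",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
		if err != nil {
			panic(err)
		}
		env := string(out)
		// --docker-env=FOO=BAR and --docker-env=BAZ=BAT should land in Environment=.
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
		}
	}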

                                                
                                    
TestForceSystemdFlag (27.45s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-941720 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-941720 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.130319603s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-941720 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-941720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-941720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-941720: (2.016135283s)
--- PASS: TestForceSystemdFlag (27.45s)

                                                
                                    
TestForceSystemdEnv (27.17s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-773311 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-773311 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.754349552s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-773311 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-773311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-773311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-773311: (2.085582233s)
--- PASS: TestForceSystemdEnv (27.17s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.26s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 12:46:36.523958  311307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:46:36.524132  311307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 12:46:36.552712  311307 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 12:46:36.553080  311307 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:46:36.553135  311307 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3297409985/001/docker-machine-driver-kvm2
I0127 12:46:36.709536  311307 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3297409985/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00078e010 gz:0xc00078e018 tar:0xc00043ffb0 tar.bz2:0xc00043ffc0 tar.gz:0xc00043ffd0 tar.xz:0xc00043ffe0 tar.zst:0xc00078e000 tbz2:0xc00043ffc0 tgz:0xc00043ffd0 txz:0xc00043ffe0 tzst:0xc00078e000 xz:0xc00078e020 zip:0xc00078e030 zst:0xc00078e028] Getters:map[file:0xc001938660 http:0xc000bae910 https:0xc000bae960] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:46:36.709601  311307 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3297409985/001/docker-machine-driver-kvm2
I0127 12:46:37.292669  311307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:46:37.292763  311307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 12:46:37.322152  311307 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 12:46:37.322189  311307 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 12:46:37.322256  311307 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 12:46:37.322296  311307 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3297409985/002/docker-machine-driver-kvm2
I0127 12:46:37.343308  311307 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3297409985/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00078e010 gz:0xc00078e018 tar:0xc00043ffb0 tar.bz2:0xc00043ffc0 tar.gz:0xc00043ffd0 tar.xz:0xc00043ffe0 tar.zst:0xc00078e000 tbz2:0xc00043ffc0 tgz:0xc00043ffd0 txz:0xc00043ffe0 tzst:0xc00078e000 xz:0xc00078e020 zip:0xc00078e030 zst:0xc00078e028] Getters:map[file:0xc0006ba680 http:0xc0008bd590 https:0xc0008bd5e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 12:46:37.343353  311307 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3297409985/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.26s)
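
The two 404s above are expected, not failures: the installer first tries the arch-suffixed release asset and, when its checksum file is missing, retries the unsuffixed "common" name. A sketch of that fallback, under the assumption that probing the .sha256 URL is enough to pick a source (the helper name is invented):

	package main

	import (
		"fmt"
		"net/http"
	)

	const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

	// resolveDriverURL prefers the arch-specific asset and falls back to the
	// common one when the arch-specific checksum file is missing, mirroring
	// the "trying to get the common version" lines in the log.
	func resolveDriverURL(arch string) (string, error) {
		for _, u := range []string{base + "-" + arch, base} {
			resp, err := http.Head(u + ".sha256")
			if err != nil {
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return u, nil
			}
		}
		return "", fmt.Errorf("no downloadable driver under %s", base)
	}

	func main() {
		fmt.Println(resolveDriverURL("amd64"))
	}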

                                                
                                    
TestErrorSpam/setup (20.79s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-186571 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-186571 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-186571 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-186571 --driver=docker  --container-runtime=docker: (20.79120767s)
--- PASS: TestErrorSpam/setup (20.79s)

                                                
                                    
TestErrorSpam/start (0.56s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

                                                
                                    
TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 pause
--- PASS: TestErrorSpam/pause (1.15s)

                                                
                                    
TestErrorSpam/unpause (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

                                                
                                    
TestErrorSpam/stop (1.93s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 stop: (1.746322362s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-186571 --log_dir /tmp/nospam-186571 stop
--- PASS: TestErrorSpam/stop (1.93s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/test/nested/copy/311307/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (60.77s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-953711 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m0.76760961s)
--- PASS: TestFunctional/serial/StartWithProxy (60.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 12:15:49.361419  311307 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-953711 --alsologtostderr -v=8: (31.630950667s)
functional_test.go:663: soft start took 31.631715956s for "functional-953711" cluster.
I0127 12:16:20.992746  311307 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (31.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-953711 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-953711 /tmp/TestFunctionalserialCacheCmdcacheadd_local1687809021/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache add minikube-local-cache-test:functional-953711
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache delete minikube-local-cache-test:functional-953711
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-953711
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (258.727915ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)
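
To summarize the block above: the test removes a cached image inside the node, confirms it is gone (the expected exit status 1), then uses cache reload to push it back in. A minimal re-run of that sequence via os/exec, assuming the functional-953711 profile is still up and the binary path matches this job:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary used by this job.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-953711"
		run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		run("-p", p, "cache", "reload") // pushes cached images back into the node
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("unexpected: image missing after reload:", err)
		}
	}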

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 kubectl -- --context functional-953711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-953711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-953711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.699699837s)
functional_test.go:761: restart took 41.699844098s for "functional-953711" cluster.
I0127 12:17:07.483918  311307 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.70s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-953711 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 logs
--- PASS: TestFunctional/serial/LogsCmd (0.89s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 logs --file /tmp/TestFunctionalserialLogsFileCmd3533091492/001/logs.txt
E0127 12:17:08.461829  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.468235  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.479610  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.501212  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.542633  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.624132  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:08.785651  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:09.107354  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

                                                
                                    
TestFunctional/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-953711 apply -f testdata/invalidsvc.yaml
E0127 12:17:09.748672  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:17:11.030337  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-953711
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-953711: exit status 115 (326.662859ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31365 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-953711 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
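
The SVC_UNREACHABLE exit above fires because the service exists but no running pod backs it. A rough equivalent of that precondition, sketched with kubectl rather than minikube's internal client (context name from the log; the service has since been deleted, so this is illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// A service with no ready pods behind it has empty endpoint addresses.
		out, err := exec.Command("kubectl", "--context", "functional-953711",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("service not available: no running pod for service invalid-svc found")
		}
	}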

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 config get cpus: exit status 14 (75.958905ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 config get cpus: exit status 14 (61.006595ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
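
The two exit status 14 results above are the point of the test: config get on an unset key fails with a distinct exit code rather than printing an empty value. A sketch of branching on that code from Go (binary path and profile as in the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-953711", "config", "get", "cpus").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit code 14: "specified key could not be found in config".
			fmt.Printf("exit code %d: %s", ee.ExitCode(), out)
			return
		}
		fmt.Printf("cpus = %s", out)
	}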

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-953711 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-953711 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 364492: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.32s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-953711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (159.455663ms)

                                                
                                                
-- stdout --
	* [functional-953711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:17:28.328649  362748 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:17:28.328964  362748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:17:28.328977  362748 out.go:358] Setting ErrFile to fd 2...
	I0127 12:17:28.328983  362748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:17:28.329186  362748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:17:28.329761  362748 out.go:352] Setting JSON to false
	I0127 12:17:28.331029  362748 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28795,"bootTime":1737951453,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:17:28.331141  362748 start.go:139] virtualization: kvm guest
	I0127 12:17:28.333516  362748 out.go:177] * [functional-953711] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:17:28.334775  362748 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:17:28.334775  362748 notify.go:220] Checking for updates...
	I0127 12:17:28.337168  362748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:17:28.338529  362748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:17:28.340736  362748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:17:28.342162  362748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:17:28.343384  362748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:17:28.345186  362748 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:17:28.345882  362748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:17:28.373180  362748 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:17:28.373289  362748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:17:28.423314  362748 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2025-01-27 12:17:28.414596912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:17:28.423420  362748 docker.go:318] overlay module found
	I0127 12:17:28.425935  362748 out.go:177] * Using the docker driver based on existing profile
	I0127 12:17:28.427008  362748 start.go:297] selected driver: docker
	I0127 12:17:28.427020  362748 start.go:901] validating driver "docker" against &{Name:functional-953711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-953711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:17:28.427116  362748 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:17:28.429002  362748 out.go:201] 
	W0127 12:17:28.430107  362748 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 12:17:28.431197  362748 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.37s)
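
The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes from an up-front validation: --dry-run with --memory 250MB fails before any resources are created because the request is below the usable minimum. A toy version of that guard, with the 1800MB floor taken from the error text (the function name is invented):

	package main

	import "fmt"

	// minUsableMB is the floor quoted in the error message above.
	const minUsableMB = 1800

	// validateRequestedMemory rejects allocations below the usable minimum,
	// as the dry run does before touching any resources.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250))  // fails, as with --memory 250MB
		fmt.Println(validateRequestedMemory(4000)) // passes
	}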

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-953711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-953711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (156.316129ms)

                                                
                                                
-- stdout --
	* [functional-953711] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:17:28.694722  363030 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:17:28.694827  363030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:17:28.694838  363030 out.go:358] Setting ErrFile to fd 2...
	I0127 12:17:28.694845  363030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:17:28.695141  363030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:17:28.695682  363030 out.go:352] Setting JSON to false
	I0127 12:17:28.696794  363030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28796,"bootTime":1737951453,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:17:28.696928  363030 start.go:139] virtualization: kvm guest
	I0127 12:17:28.698809  363030 out.go:177] * [functional-953711] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 12:17:28.700423  363030 notify.go:220] Checking for updates...
	I0127 12:17:28.700431  363030 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:17:28.701878  363030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:17:28.703063  363030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	I0127 12:17:28.704436  363030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	I0127 12:17:28.705610  363030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:17:28.706797  363030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:17:28.708441  363030 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:17:28.709059  363030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:17:28.738152  363030 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:17:28.738240  363030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:17:28.790964  363030 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-27 12:17:28.7817325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:17:28.791065  363030 docker.go:318] overlay module found
	I0127 12:17:28.792843  363030 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 12:17:28.794151  363030 start.go:297] selected driver: docker
	I0127 12:17:28.794166  363030 start.go:901] validating driver "docker" against &{Name:functional-953711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-953711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:17:28.794266  363030 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:17:28.796085  363030 out.go:201] 
	W0127 12:17:28.797146  363030 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0127 12:17:28.798204  363030 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
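TestFunctional/parallel/InternationalLanguage forces a non-English locale together with an undersized memory request, so the localized RSRC_INSUFFICIENT_REQ_MEMORY message above is the expected outcome. A minimal sketch of such an invocation (the locale value and the --dry-run/--memory flags are assumptions, not taken from this log):

	# assumed reproduction: a non-English locale plus a deliberately tiny memory request
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-953711 --dry-run --memory 250MB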

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
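The three invocations above cover the plain, templated, and JSON status forms. For scripting, the same fields can be read from the JSON output; a hedged sketch (jq is an assumption, not used by the test):

	# prints "Running" for a healthy host
	out/minikube-linux-amd64 -p functional-953711 status -o json | jq -r .Host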

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-953711 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-953711 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vrpg5" [29dd2cba-50ca-46a9-a077-b050424749dd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vrpg5" [29dd2cba-50ca-46a9-a077-b050424749dd] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.03230752s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32111
functional_test.go:1675: http://192.168.49.2:32111: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-vrpg5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32111
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)
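Condensed, the flow this test exercises (commands as logged above; the final curl is an assumption standing in for the harness's HTTP check):

	kubectl --context functional-953711 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-953711 expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(out/minikube-linux-amd64 -p functional-953711 service hello-node-connect --url)"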

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [51889cec-40bc-4e51-8e77-8c2d31a801da] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004464892s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-953711 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-953711 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-953711 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-953711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [81035599-0e3c-40c9-aaa4-4dcda1a7615c] Pending
helpers_test.go:344: "sp-pod" [81035599-0e3c-40c9-aaa4-4dcda1a7615c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [81035599-0e3c-40c9-aaa4-4dcda1a7615c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.012792936s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-953711 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-953711 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-953711 delete -f testdata/storage-provisioner/pod.yaml: (1.90461537s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-953711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3241b06a-8041-4202-b054-27f5dd48722a] Pending
helpers_test.go:344: "sp-pod" [3241b06a-8041-4202-b054-27f5dd48722a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3241b06a-8041-4202-b054-27f5dd48722a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003078223s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-953711 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.85s)
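The essential persistence check above, condensed: data written through the PVC must survive deleting and recreating the pod (manifests are the repo's testdata, as logged):

	kubectl --context functional-953711 exec sp-pod -- touch /tmp/mount/foo        # write through the claim
	kubectl --context functional-953711 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-953711 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-953711 exec sp-pod -- ls /tmp/mount               # foo is still there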

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh -n functional-953711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cp functional-953711:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd594988716/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh -n functional-953711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh -n functional-953711 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)

                                                
                                    
TestFunctional/parallel/MySQL (24.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-953711 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-lk8tp" [6260e519-1f52-49dc-90fb-57d8042345a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-lk8tp" [6260e519-1f52-49dc-90fb-57d8042345a3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003908859s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;": exit status 1 (139.702101ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:17:48.381097  311307 retry.go:31] will retry after 1.042670801s: exit status 1
2025/01/27 12:17:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1807: (dbg) Run:  kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;"
E0127 12:17:49.439762  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;": exit status 1 (108.539772ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:17:49.533104  311307 retry.go:31] will retry after 2.145647152s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;": exit status 1 (108.354366ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:17:51.787949  311307 retry.go:31] will retry after 1.451292583s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-953711 exec mysql-58ccfd96bb-lk8tp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.28s)
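The intermediate ERROR 1045 and ERROR 2002 exits above are normal while mysqld is still initializing; the harness simply retries until the query succeeds. A hedged shell equivalent of that retry (the deploy/ addressing and 2s interval are assumptions, not from the test):

	until kubectl --context functional-953711 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
		sleep 2
	done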

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/311307/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /etc/test/nested/copy/311307/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/311307.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /etc/ssl/certs/311307.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/311307.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /usr/share/ca-certificates/311307.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3113072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /etc/ssl/certs/3113072.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3113072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /usr/share/ca-certificates/3113072.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
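The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links for the synced certificates. The pairing can be verified by recomputing the hash (the openssl invocation, and its availability inside the node, are assumptions, not part of the test):

	# expected to print 51391683, the basename of the corresponding .0 link
	out/minikube-linux-amd64 -p functional-953711 ssh "openssl x509 -hash -noout -in /usr/share/ca-certificates/311307.pem"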

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-953711 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
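The same labels can be read with jsonpath instead of a go-template; an equivalent sketch (shown for illustration, not used by the test):

	kubectl --context functional-953711 get nodes -o jsonpath='{.items[0].metadata.labels}'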

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh "sudo systemctl is-active crio": exit status 1 (260.469982ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
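The non-zero exit is the point of this test: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, and ssh propagates that status. A sketch making the code visible (single quotes keep $? evaluated on the node):

	out/minikube-linux-amd64 -p functional-953711 ssh 'sudo systemctl is-active crio; echo exit=$?'   # inactive, exit=3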

                                                
                                    
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-953711 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-953711 expose deployment hello-node --type=NodePort --port=8080
E0127 12:17:13.592254  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-vf6m6" [0076235d-cd1c-46c7-9b4a-733c8cf15463] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-vf6m6" [0076235d-cd1c-46c7-9b4a-733c8cf15463] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00372635s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 358453: os: process already finished
helpers_test.go:502: unable to terminate pid 357981: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-953711 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [baf2997e-8bf1-49af-980e-cc8ce4cf9651] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [baf2997e-8bf1-49af-980e-cc8ce4cf9651] Running
E0127 12:17:18.715949  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003551005s
I0127 12:17:24.634279  311307 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-953711 docker-env) && out/minikube-linux-amd64 status -p functional-953711"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-953711 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.02s)
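docker-env points the host's docker CLI at the daemon inside the minikube node, which is why the plain docker images call above lists the cluster's images. Undoing it is symmetric; a sketch (the --unset flag is assumed available in this build):

	eval "$(out/minikube-linux-amd64 -p functional-953711 docker-env)"            # target the node's daemon
	docker images
	eval "$(out/minikube-linux-amd64 -p functional-953711 docker-env --unset)"    # back to the host daemon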

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service list -o json
functional_test.go:1494: Took "488.27256ms" to run "out/minikube-linux-amd64 -p functional-953711 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-953711 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.48.166 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
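The 10.101.48.166 address is the nginx-svc LoadBalancer ingress IP that the running minikube tunnel makes routable from the host. A sketch of checking it by hand (curl is an assumption; the jsonpath lookup mirrors the IngressIP step above):

	IP=$(kubectl --context functional-953711 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl "http://$IP"   # reachable only while the tunnel process is running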

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-953711 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31975
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-953711 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-953711
docker.io/kicbase/echo-server:functional-953711
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-953711 image ls --format short --alsologtostderr:
I0127 12:17:39.007339  366397 out.go:345] Setting OutFile to fd 1 ...
I0127 12:17:39.007482  366397 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.007493  366397 out.go:358] Setting ErrFile to fd 2...
I0127 12:17:39.007500  366397 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.007715  366397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:17:39.008392  366397 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.008530  366397 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.008938  366397 cli_runner.go:164] Run: docker container inspect functional-953711 --format={{.State.Status}}
I0127 12:17:39.029473  366397 ssh_runner.go:195] Run: systemctl --version
I0127 12:17:39.029546  366397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-953711
I0127 12:17:39.050220  366397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/functional-953711/id_rsa Username:docker}
I0127 12:17:39.144621  366397 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-953711 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-953711 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| docker.io/library/nginx                     | alpine            | 93f9c72967dbc | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/minikube-local-cache-test | functional-953711 | 54e536eed6a51 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| docker.io/library/nginx                     | latest            | 9bea9f2796e23 | 192MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| localhost/my-image                          | functional-953711 | ebdedf97e3647 | 1.24MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-953711 image ls --format table --alsologtostderr:
I0127 12:17:42.956470  367454 out.go:345] Setting OutFile to fd 1 ...
I0127 12:17:42.956605  367454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:42.956618  367454 out.go:358] Setting ErrFile to fd 2...
I0127 12:17:42.956624  367454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:42.956850  367454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:17:42.957395  367454 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:42.957492  367454 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:42.957881  367454 cli_runner.go:164] Run: docker container inspect functional-953711 --format={{.State.Status}}
I0127 12:17:42.980157  367454 ssh_runner.go:195] Run: systemctl --version
I0127 12:17:42.980257  367454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-953711
I0127 12:17:43.001032  367454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/functional-953711/id_rsa Username:docker}
I0127 12:17:43.089633  367454 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-953711 image ls --format json --alsologtostderr:
[{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":
[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"ebdedf97e36472e64be1e66e52f3ad42c7354a5e02087fa4fda30d99b33bc692","repoDigests":[],"repoTags":["localhost/my-image:functional-953711"],"size":"1240000"},{"id":"54e536eed6a51fad2e51135749b69557b424041907ae4ec384fbc334008a2bd8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-953711"],"size":"30"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-953711"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr
.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-953711 image ls --format json --alsologtostderr:
I0127 12:17:42.659496  367231 out.go:345] Setting OutFile to fd 1 ...
I0127 12:17:42.659606  367231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:42.659615  367231 out.go:358] Setting ErrFile to fd 2...
I0127 12:17:42.659620  367231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:42.659815  367231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:17:42.660417  367231 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:42.660521  367231 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:42.660895  367231 cli_runner.go:164] Run: docker container inspect functional-953711 --format={{.State.Status}}
I0127 12:17:42.681914  367231 ssh_runner.go:195] Run: systemctl --version
I0127 12:17:42.681969  367231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-953711
I0127 12:17:42.701488  367231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/functional-953711/id_rsa Username:docker}
I0127 12:17:42.869420  367231 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
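The JSON form is the scripting-friendly variant of image ls; for example, summing the reported image sizes (jq is an assumption, not used by the test):

	out/minikube-linux-amd64 -p functional-953711 image ls --format json | jq '[.[].size | tonumber] | add'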

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-953711 image ls --format yaml --alsologtostderr:
- id: 93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-953711
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 54e536eed6a51fad2e51135749b69557b424041907ae4ec384fbc334008a2bd8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-953711
size: "30"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-953711 image ls --format yaml --alsologtostderr:
I0127 12:17:39.233289  366449 out.go:345] Setting OutFile to fd 1 ...
I0127 12:17:39.233414  366449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.233424  366449 out.go:358] Setting ErrFile to fd 2...
I0127 12:17:39.233431  366449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.233665  366449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:17:39.234336  366449 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.234443  366449 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.234890  366449 cli_runner.go:164] Run: docker container inspect functional-953711 --format={{.State.Status}}
I0127 12:17:39.252776  366449 ssh_runner.go:195] Run: systemctl --version
I0127 12:17:39.252836  366449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-953711
I0127 12:17:39.270555  366449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/functional-953711/id_rsa Username:docker}
I0127 12:17:39.360978  366449 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh pgrep buildkitd: exit status 1 (275.25841ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image build -t localhost/my-image:functional-953711 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-953711 image build -t localhost/my-image:functional-953711 testdata/build --alsologtostderr: (2.706959885s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-953711 image build -t localhost/my-image:functional-953711 testdata/build --alsologtostderr:
I0127 12:17:39.721787  366607 out.go:345] Setting OutFile to fd 1 ...
I0127 12:17:39.721955  366607 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.721970  366607 out.go:358] Setting ErrFile to fd 2...
I0127 12:17:39.721976  366607 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:17:39.722274  366607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:17:39.723029  366607 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.723682  366607 config.go:182] Loaded profile config "functional-953711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:17:39.724395  366607 cli_runner.go:164] Run: docker container inspect functional-953711 --format={{.State.Status}}
I0127 12:17:39.744469  366607 ssh_runner.go:195] Run: systemctl --version
I0127 12:17:39.744536  366607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-953711
I0127 12:17:39.763816  366607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/functional-953711/id_rsa Username:docker}
I0127 12:17:39.856992  366607 build_images.go:161] Building image from path: /tmp/build.3299713882.tar
I0127 12:17:39.857053  366607 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 12:17:39.866609  366607 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3299713882.tar
I0127 12:17:39.870754  366607 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3299713882.tar: stat -c "%s %y" /var/lib/minikube/build/build.3299713882.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3299713882.tar': No such file or directory
I0127 12:17:39.870790  366607 ssh_runner.go:362] scp /tmp/build.3299713882.tar --> /var/lib/minikube/build/build.3299713882.tar (3072 bytes)
I0127 12:17:39.897959  366607 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3299713882
I0127 12:17:39.907553  366607 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3299713882 -xf /var/lib/minikube/build/build.3299713882.tar
I0127 12:17:39.916598  366607 docker.go:360] Building image: /var/lib/minikube/build/build.3299713882
I0127 12:17:39.916672  366607 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-953711 /var/lib/minikube/build/build.3299713882
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ebdedf97e36472e64be1e66e52f3ad42c7354a5e02087fa4fda30d99b33bc692 done
#8 naming to localhost/my-image:functional-953711 done
#8 DONE 0.0s
I0127 12:17:42.315626  366607 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-953711 /var/lib/minikube/build/build.3299713882: (2.398920589s)
I0127 12:17:42.315720  366607 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3299713882
I0127 12:17:42.328126  366607 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3299713882.tar
I0127 12:17:42.366664  366607 build_images.go:217] Built localhost/my-image:functional-953711 from /tmp/build.3299713882.tar
I0127 12:17:42.366717  366607 build_images.go:133] succeeded building to: functional-953711
I0127 12:17:42.366725  366607 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
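The BuildKit trace above pins down what is inside the 3072-byte context tar: a 97-byte Dockerfile with exactly three instructions (steps [1/3] FROM, [2/3] RUN, [3/3] ADD) plus a 62-byte build context. A minimal Go sketch that reproduces the same flow by hand; the Dockerfile contents are reconstructed from the log and the file names are assumptions, not the actual test fixture:

// Sketch: reproduce the ImageBuild flow. minikube tars the context, copies it
// into the node, and runs `docker build` there -- the sequence logged by
// build_images.go above. Dockerfile body reconstructed from steps [1/3]..[3/3].
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, _ := os.MkdirTemp("", "build")
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644)
	os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644)

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-953711",
		"image", "build", "-t", "localhost/my-image:functional-953711", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.Run()
}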

TestFunctional/parallel/ImageCommands/Setup (2.25s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.23058629s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-953711
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.25s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31975
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdany-port3020083970/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737980245769856646" to /tmp/TestFunctionalparallelMountCmdany-port3020083970/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737980245769856646" to /tmp/TestFunctionalparallelMountCmdany-port3020083970/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737980245769856646" to /tmp/TestFunctionalparallelMountCmdany-port3020083970/001/test-1737980245769856646
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.570384ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 12:17:26.045712  311307 retry.go:31] will retry after 317.192475ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 12:17 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 12:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 12:17 test-1737980245769856646
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh cat /mount-9p/test-1737980245769856646
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-953711 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [453e360e-6186-4741-bfdc-d77e59b4cc83] Pending
helpers_test.go:344: "busybox-mount" [453e360e-6186-4741-bfdc-d77e59b4cc83] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [453e360e-6186-4741-bfdc-d77e59b4cc83] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [453e360e-6186-4741-bfdc-d77e59b4cc83] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003609453s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-953711 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdany-port3020083970/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.73s)
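Note the retry.go line above: the first findmnt probe races the 9p mount daemon coming up, so the harness retries with backoff rather than failing. A minimal sketch of that poll loop, assuming the same profile name and mount point:

// Sketch: poll until the 9p mount appears, backing off between attempts,
// mirroring the retry.go behaviour in the log. Attempt count is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 300 * time.Millisecond
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-953711",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is up")
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff, bounded only by the attempt count
	}
}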

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "316.87743ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.22993ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "311.721637ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.383256ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image load --daemon kicbase/echo-server:functional-953711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image load --daemon kicbase/echo-server:functional-953711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0127 12:17:28.957547  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-953711
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image load --daemon kicbase/echo-server:functional-953711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image save kicbase/echo-server:functional-953711 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image rm kicbase/echo-server:functional-953711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-953711
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 image save --daemon kicbase/echo-server:functional-953711 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-953711
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
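Taken together, the last four image tests form a complete round trip: save the image to a tar, remove it from the node, load it back, then save it into the host's Docker daemon. A sketch chaining the same minikube subcommands; the tar path is an assumption (the real run writes under the Jenkins workspace):

// Sketch: the tar round trip that ImageSaveToFile / ImageRemove /
// ImageLoadFromFile / ImageSaveDaemon exercise above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	mk := "out/minikube-linux-amd64"
	img := "kicbase/echo-server:functional-953711"
	tar := "/tmp/echo-server-save.tar" // assumed path

	run(mk, "-p", "functional-953711", "image", "save", img, tar)        // node -> tar
	run(mk, "-p", "functional-953711", "image", "rm", img)               // drop from node
	run(mk, "-p", "functional-953711", "image", "load", tar)             // tar -> node
	run(mk, "-p", "functional-953711", "image", "save", "--daemon", img) // node -> host docker
}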

TestFunctional/parallel/MountCmd/specific-port (2.20s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdspecific-port3691975365/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.816225ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 12:17:34.878806  311307 retry.go:31] will retry after 720.123605ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdspecific-port3691975365/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-953711 ssh "sudo umount -f /mount-9p": exit status 1 (293.035886ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-953711 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdspecific-port3691975365/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-953711 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-953711 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-953711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557742381/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)
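VerifyCleanup starts three mount daemons against one host directory and tears them all down with a single mount --kill=true; the "unable to find parent, assuming dead" lines are the helpers confirming each process is already gone. A sketch of the same spawn-and-reap pattern, with the source directory and mount points as assumptions:

// Sketch: spawn two mount daemons, then reap every mount process for the
// profile with --kill=true, as VerifyCleanup does above.
package main

import (
	"os/exec"
	"time"
)

func main() {
	mk := "out/minikube-linux-amd64"
	for _, mp := range []string{"/mount1", "/mount2"} {
		// `minikube mount` runs in the foreground, so start it as a daemon.
		exec.Command(mk, "mount", "-p", "functional-953711", "/tmp/src:"+mp).Start()
	}
	time.Sleep(2 * time.Second) // give the 9p servers a moment to come up

	// One kill switch tears down all mount processes for the profile.
	exec.Command(mk, "mount", "-p", "functional-953711", "--kill=true").Run()
}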

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-953711
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-953711
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-953711
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (99.43s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-179870 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0127 12:18:30.402469  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-179870 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.767473016s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.43s)

TestMultiControlPlane/serial/DeployApp (5.28s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-179870 -- rollout status deployment/busybox: (3.369637617s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-c8x55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-j98qg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-jbr9m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-c8x55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-j98qg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-jbr9m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-c8x55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-j98qg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-jbr9m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.28s)
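DeployApp fans three DNS lookups out across every busybox replica, which is what makes it a useful smoke test for cluster DNS on a multi-control-plane setup. A sketch of the same fan-out; the pod names are copied from the log, though in practice they would be discovered with kubectl get pods:

// Sketch: resolve three names from every busybox replica via kubectl exec,
// mirroring ha_test.go's DeployApp checks above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-58667487b6-c8x55", "busybox-58667487b6-j98qg", "busybox-58667487b6-jbr9m"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-179870",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s / %s: err=%v\n%s", pod, name, err, out)
		}
	}
}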

TestMultiControlPlane/serial/PingHostFromPods (1.05s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-c8x55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-c8x55 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-j98qg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-j98qg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-jbr9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-179870 -- exec busybox-58667487b6-jbr9m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.05s)

TestMultiControlPlane/serial/AddWorkerNode (19.38s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-179870 -v=7 --alsologtostderr
E0127 12:19:52.324366  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-179870 -v=7 --alsologtostderr: (18.574578503s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.38s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-179870 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (15.64s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp testdata/cp-test.txt ha-179870:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile620857878/001/cp-test_ha-179870.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870:/home/docker/cp-test.txt ha-179870-m02:/home/docker/cp-test_ha-179870_ha-179870-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test_ha-179870_ha-179870-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870:/home/docker/cp-test.txt ha-179870-m03:/home/docker/cp-test_ha-179870_ha-179870-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test_ha-179870_ha-179870-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870:/home/docker/cp-test.txt ha-179870-m04:/home/docker/cp-test_ha-179870_ha-179870-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test_ha-179870_ha-179870-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp testdata/cp-test.txt ha-179870-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile620857878/001/cp-test_ha-179870-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m02:/home/docker/cp-test.txt ha-179870:/home/docker/cp-test_ha-179870-m02_ha-179870.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test_ha-179870-m02_ha-179870.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m02:/home/docker/cp-test.txt ha-179870-m03:/home/docker/cp-test_ha-179870-m02_ha-179870-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test_ha-179870-m02_ha-179870-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m02:/home/docker/cp-test.txt ha-179870-m04:/home/docker/cp-test_ha-179870-m02_ha-179870-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test_ha-179870-m02_ha-179870-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp testdata/cp-test.txt ha-179870-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile620857878/001/cp-test_ha-179870-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m03:/home/docker/cp-test.txt ha-179870:/home/docker/cp-test_ha-179870-m03_ha-179870.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test_ha-179870-m03_ha-179870.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m03:/home/docker/cp-test.txt ha-179870-m02:/home/docker/cp-test_ha-179870-m03_ha-179870-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test_ha-179870-m03_ha-179870-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m03:/home/docker/cp-test.txt ha-179870-m04:/home/docker/cp-test_ha-179870-m03_ha-179870-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test_ha-179870-m03_ha-179870-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp testdata/cp-test.txt ha-179870-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile620857878/001/cp-test_ha-179870-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m04:/home/docker/cp-test.txt ha-179870:/home/docker/cp-test_ha-179870-m04_ha-179870.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870 "sudo cat /home/docker/cp-test_ha-179870-m04_ha-179870.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m04:/home/docker/cp-test.txt ha-179870-m02:/home/docker/cp-test_ha-179870-m04_ha-179870-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m02 "sudo cat /home/docker/cp-test_ha-179870-m04_ha-179870-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 cp ha-179870-m04:/home/docker/cp-test.txt ha-179870-m03:/home/docker/cp-test_ha-179870-m04_ha-179870-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 ssh -n ha-179870-m03 "sudo cat /home/docker/cp-test_ha-179870-m04_ha-179870-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.64s)
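CopyFile is an all-pairs matrix: cp-test.txt is pushed from each node to every other node, and every transfer is verified by cat-ing both ends over ssh. A sketch of one leg of that matrix; the intermediate file name is an assumption:

// Sketch: one leg of the CopyFile matrix above -- push a file to a node,
// copy it across to another node, and diff the two copies.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func catOn(node, path string) []byte {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-179870",
		"ssh", "-n", node, "sudo cat "+path).Output()
	return out
}

func main() {
	mk := "out/minikube-linux-amd64"
	exec.Command(mk, "-p", "ha-179870", "cp", "testdata/cp-test.txt",
		"ha-179870-m02:/home/docker/cp-test.txt").Run()
	exec.Command(mk, "-p", "ha-179870", "cp", "ha-179870-m02:/home/docker/cp-test.txt",
		"ha-179870-m03:/home/docker/cp-test_m02_m03.txt").Run()

	a := catOn("ha-179870-m02", "/home/docker/cp-test.txt")
	b := catOn("ha-179870-m03", "/home/docker/cp-test_m02_m03.txt")
	fmt.Println("round trip intact:", bytes.Equal(a, b))
}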

TestMultiControlPlane/serial/StopSecondaryNode (11.45s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-179870 node stop m02 -v=7 --alsologtostderr: (10.816558048s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr: exit status 7 (637.400624ms)

-- stdout --
	ha-179870
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-179870-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179870-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-179870-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0127 12:20:34.652750  395079 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:20:34.652874  395079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:20:34.652883  395079 out.go:358] Setting ErrFile to fd 2...
	I0127 12:20:34.652888  395079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:20:34.653043  395079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:20:34.653236  395079 out.go:352] Setting JSON to false
	I0127 12:20:34.653264  395079 mustload.go:65] Loading cluster: ha-179870
	I0127 12:20:34.653398  395079 notify.go:220] Checking for updates...
	I0127 12:20:34.653705  395079 config.go:182] Loaded profile config "ha-179870": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:20:34.653731  395079 status.go:174] checking status of ha-179870 ...
	I0127 12:20:34.654276  395079 cli_runner.go:164] Run: docker container inspect ha-179870 --format={{.State.Status}}
	I0127 12:20:34.671189  395079 status.go:371] ha-179870 host status = "Running" (err=<nil>)
	I0127 12:20:34.671215  395079 host.go:66] Checking if "ha-179870" exists ...
	I0127 12:20:34.671429  395079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179870
	I0127 12:20:34.689433  395079 host.go:66] Checking if "ha-179870" exists ...
	I0127 12:20:34.689721  395079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:20:34.689780  395079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179870
	I0127 12:20:34.706063  395079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/ha-179870/id_rsa Username:docker}
	I0127 12:20:34.793188  395079 ssh_runner.go:195] Run: systemctl --version
	I0127 12:20:34.797144  395079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:20:34.807465  395079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:20:34.855565  395079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-27 12:20:34.846813598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:20:34.856405  395079 kubeconfig.go:125] found "ha-179870" server: "https://192.168.49.254:8443"
	I0127 12:20:34.856445  395079 api_server.go:166] Checking apiserver status ...
	I0127 12:20:34.856499  395079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:20:34.867582  395079 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2372/cgroup
	I0127 12:20:34.875947  395079 api_server.go:182] apiserver freezer: "12:freezer:/docker/9bfb7b2785387a9102fb83af5099c821e7214fa2994c333a2014ade8b8cd1ef4/kubepods/burstable/pod3dfb29da110905a53a8ceecbb8e5ad3d/839633ae4dd01743b7904c9b38e8e074db1ea1f60665e977ae5353f931a73be6"
	I0127 12:20:34.876015  395079 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9bfb7b2785387a9102fb83af5099c821e7214fa2994c333a2014ade8b8cd1ef4/kubepods/burstable/pod3dfb29da110905a53a8ceecbb8e5ad3d/839633ae4dd01743b7904c9b38e8e074db1ea1f60665e977ae5353f931a73be6/freezer.state
	I0127 12:20:34.884161  395079 api_server.go:204] freezer state: "THAWED"
	I0127 12:20:34.884227  395079 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 12:20:34.888421  395079 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 12:20:34.888445  395079 status.go:463] ha-179870 apiserver status = Running (err=<nil>)
	I0127 12:20:34.888455  395079 status.go:176] ha-179870 status: &{Name:ha-179870 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:20:34.888469  395079 status.go:174] checking status of ha-179870-m02 ...
	I0127 12:20:34.888704  395079 cli_runner.go:164] Run: docker container inspect ha-179870-m02 --format={{.State.Status}}
	I0127 12:20:34.906062  395079 status.go:371] ha-179870-m02 host status = "Stopped" (err=<nil>)
	I0127 12:20:34.906083  395079 status.go:384] host is not running, skipping remaining checks
	I0127 12:20:34.906089  395079 status.go:176] ha-179870-m02 status: &{Name:ha-179870-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:20:34.906106  395079 status.go:174] checking status of ha-179870-m03 ...
	I0127 12:20:34.906345  395079 cli_runner.go:164] Run: docker container inspect ha-179870-m03 --format={{.State.Status}}
	I0127 12:20:34.922866  395079 status.go:371] ha-179870-m03 host status = "Running" (err=<nil>)
	I0127 12:20:34.922897  395079 host.go:66] Checking if "ha-179870-m03" exists ...
	I0127 12:20:34.923162  395079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179870-m03
	I0127 12:20:34.939224  395079 host.go:66] Checking if "ha-179870-m03" exists ...
	I0127 12:20:34.939459  395079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:20:34.939511  395079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179870-m03
	I0127 12:20:34.955754  395079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/ha-179870-m03/id_rsa Username:docker}
	I0127 12:20:35.045095  395079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:20:35.055830  395079 kubeconfig.go:125] found "ha-179870" server: "https://192.168.49.254:8443"
	I0127 12:20:35.055860  395079 api_server.go:166] Checking apiserver status ...
	I0127 12:20:35.055895  395079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:20:35.065719  395079 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2341/cgroup
	I0127 12:20:35.074051  395079 api_server.go:182] apiserver freezer: "12:freezer:/docker/adc3140985bea144d99c497c35f28b4b4d6abdb5ddbfaada1c7526591ab3cd33/kubepods/burstable/pod201484c1dfba1291c395080550011303/e893e217f243ef36f419b8c100c4ef08316782a447ec5b63ec93125e3181f24c"
	I0127 12:20:35.074112  395079 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/adc3140985bea144d99c497c35f28b4b4d6abdb5ddbfaada1c7526591ab3cd33/kubepods/burstable/pod201484c1dfba1291c395080550011303/e893e217f243ef36f419b8c100c4ef08316782a447ec5b63ec93125e3181f24c/freezer.state
	I0127 12:20:35.081887  395079 api_server.go:204] freezer state: "THAWED"
	I0127 12:20:35.081911  395079 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 12:20:35.085842  395079 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 12:20:35.085866  395079 status.go:463] ha-179870-m03 apiserver status = Running (err=<nil>)
	I0127 12:20:35.085877  395079 status.go:176] ha-179870-m03 status: &{Name:ha-179870-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:20:35.085900  395079 status.go:174] checking status of ha-179870-m04 ...
	I0127 12:20:35.086139  395079 cli_runner.go:164] Run: docker container inspect ha-179870-m04 --format={{.State.Status}}
	I0127 12:20:35.103560  395079 status.go:371] ha-179870-m04 host status = "Running" (err=<nil>)
	I0127 12:20:35.103588  395079 host.go:66] Checking if "ha-179870-m04" exists ...
	I0127 12:20:35.103842  395079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179870-m04
	I0127 12:20:35.122464  395079 host.go:66] Checking if "ha-179870-m04" exists ...
	I0127 12:20:35.122756  395079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:20:35.122795  395079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179870-m04
	I0127 12:20:35.139982  395079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/ha-179870-m04/id_rsa Username:docker}
	I0127 12:20:35.229067  395079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:20:35.239455  395079 status.go:176] ha-179870-m04 status: &{Name:ha-179870-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.45s)
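The stderr trace shows how minikube decides "apiserver: Running": pgrep the kube-apiserver process, read its freezer cgroup to confirm the state is THAWED, then GET /healthz on the HA load-balancer endpoint (192.168.49.254:8443). A sketch of just the final health probe; skipping TLS verification is a simplification here, not what minikube itself does:

// Sketch: the healthz probe at the end of minikube's apiserver status check,
// which the log above shows returning "200: ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip cert checks
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}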

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.91s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-179870 node start m02 -v=7 --alsologtostderr: (35.904291763s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.89s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-179870 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-179870 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-179870 -v=7 --alsologtostderr: (33.647829067s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-179870 --wait=true -v=7 --alsologtostderr
E0127 12:22:08.465200  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.702811  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.709191  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.720827  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.742194  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.784039  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:13.866030  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:14.027579  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:14.349655  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:14.991268  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:16.272576  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:18.834584  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:23.956886  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:34.198713  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:36.166111  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:22:54.680286  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:35.643023  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-179870 --wait=true -v=7 --alsologtostderr: (2m49.134698194s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-179870
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.89s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.21s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-179870 node delete m03 -v=7 --alsologtostderr: (8.467416224s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.21s)
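Note: the readiness assertion in this and later subtests leans on a kubectl go-template. For reference, unwrapped from the test-harness quoting, it prints one Ready-condition status per node:

    # one "True"/"False" per node; the same template ha_test passes to kubectl
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'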

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (32.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 stop -v=7 --alsologtostderr
E0127 12:24:57.565183  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-179870 stop -v=7 --alsologtostderr: (32.172392856s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr: exit status 7 (102.047637ms)

-- stdout --
	ha-179870
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179870-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179870-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 12:25:18.679671  425929 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:25:18.679771  425929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:25:18.679779  425929 out.go:358] Setting ErrFile to fd 2...
	I0127 12:25:18.679783  425929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:25:18.679968  425929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:25:18.680140  425929 out.go:352] Setting JSON to false
	I0127 12:25:18.680172  425929 mustload.go:65] Loading cluster: ha-179870
	I0127 12:25:18.680251  425929 notify.go:220] Checking for updates...
	I0127 12:25:18.680640  425929 config.go:182] Loaded profile config "ha-179870": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:25:18.680661  425929 status.go:174] checking status of ha-179870 ...
	I0127 12:25:18.681119  425929 cli_runner.go:164] Run: docker container inspect ha-179870 --format={{.State.Status}}
	I0127 12:25:18.700209  425929 status.go:371] ha-179870 host status = "Stopped" (err=<nil>)
	I0127 12:25:18.700230  425929 status.go:384] host is not running, skipping remaining checks
	I0127 12:25:18.700236  425929 status.go:176] ha-179870 status: &{Name:ha-179870 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:25:18.700256  425929 status.go:174] checking status of ha-179870-m02 ...
	I0127 12:25:18.700482  425929 cli_runner.go:164] Run: docker container inspect ha-179870-m02 --format={{.State.Status}}
	I0127 12:25:18.717127  425929 status.go:371] ha-179870-m02 host status = "Stopped" (err=<nil>)
	I0127 12:25:18.717150  425929 status.go:384] host is not running, skipping remaining checks
	I0127 12:25:18.717159  425929 status.go:176] ha-179870-m02 status: &{Name:ha-179870-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:25:18.717183  425929 status.go:174] checking status of ha-179870-m04 ...
	I0127 12:25:18.717453  425929 cli_runner.go:164] Run: docker container inspect ha-179870-m04 --format={{.State.Status}}
	I0127 12:25:18.734091  425929 status.go:371] ha-179870-m04 host status = "Stopped" (err=<nil>)
	I0127 12:25:18.734113  425929 status.go:384] host is not running, skipping remaining checks
	I0127 12:25:18.734119  425929 status.go:176] ha-179870-m04 status: &{Name:ha-179870-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.27s)
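Note for anyone scripting against this binary: minikube status exits non-zero when nodes are down (exit status 7 in the run above), so a stopped cluster has to be treated as an expected branch, not a failure. A minimal sketch:

    # capture the code instead of letting a non-zero exit abort the script
    out/minikube-linux-amd64 -p ha-179870 status || rc=$?
    [ "${rc:-0}" -eq 7 ] && echo "cluster is stopped"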

TestMultiControlPlane/serial/RestartCluster (78.99s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-179870 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-179870 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.228619799s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.99s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (39.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-179870 --control-plane -v=7 --alsologtostderr
E0127 12:27:08.462184  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:13.703379  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-179870 --control-plane -v=7 --alsologtostderr: (38.185864006s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-179870 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestImageBuild/serial/Setup (21.13s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-350159 --driver=docker  --container-runtime=docker
E0127 12:27:41.408343  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-350159 --driver=docker  --container-runtime=docker: (21.131648068s)
--- PASS: TestImageBuild/serial/Setup (21.13s)

TestImageBuild/serial/NormalBuild (1.41s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-350159
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-350159: (1.408306326s)
--- PASS: TestImageBuild/serial/NormalBuild (1.41s)

TestImageBuild/serial/BuildWithBuildArg (0.83s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-350159
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-350159
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.63s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-350159
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.63s)
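Taken together, the passing builds above cover the main minikube image build variants exercised by this suite; side by side for reference (all commands verbatim from the log):

    # plain build from a context directory
    out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-350159
    # build argument plus cache disable, both via --build-opt
    out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-350159
    # non-default Dockerfile location via -f
    out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-350159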

TestJSONOutput/start/Command (64.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-729686 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-729686 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m4.151268395s)
--- PASS: TestJSONOutput/start/Command (64.15s)
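Each line of the --output=json stream is a CloudEvents envelope (see the TestErrorJSONOutput stdout further down for concrete events). A sketch of consuming it, assuming jq is available; the profile name here is hypothetical:

    # print only the human-readable step messages from the event stream
    out/minikube-linux-amd64 start -p demo --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'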

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-729686 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-729686 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-729686 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-729686 --output=json --user=testUser: (10.730064932s)
--- PASS: TestJSONOutput/stop/Command (10.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-289046 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-289046 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.334622ms)

-- stdout --
	{"specversion":"1.0","id":"1d59b91c-4105-4c8f-ab60-217d829ee34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-289046] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5abeb417-c493-483c-9139-8ec5c1e98f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"6b658214-527b-48e3-b179-6c1898a8aeba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31b8f494-a537-4abe-8d3f-8b182d3231b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig"}}
	{"specversion":"1.0","id":"9da7660e-65a1-41e9-814c-7b9d5701065e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube"}}
	{"specversion":"1.0","id":"fa9de68e-8dea-4bc4-970d-afe1da68561f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e3fa579f-0c6e-4a0a-ab00-01b35b720412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8cf71553-ffdc-4ce4-9e68-649af33899e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-289046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-289046
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (25.42s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-656764 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-656764 --network=: (23.430976684s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-656764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-656764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-656764: (1.96830648s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.42s)

TestKicCustomNetwork/use_default_bridge_network (22.84s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-238417 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-238417 --network=bridge: (20.917785476s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-238417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-238417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-238417: (1.883578382s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.84s)

TestKicExistingNetwork (22.53s)

=== RUN   TestKicExistingNetwork
I0127 12:29:58.485060  311307 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 12:29:58.500568  311307 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 12:29:58.500647  311307 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 12:29:58.500669  311307 cli_runner.go:164] Run: docker network inspect existing-network
W0127 12:29:58.515899  311307 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 12:29:58.515927  311307 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0127 12:29:58.515944  311307 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0127 12:29:58.516080  311307 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:29:58.531386  311307 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a67733940b1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:47:92:de:9e} reservation:<nil>}
I0127 12:29:58.531892  311307 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f357c0}
I0127 12:29:58.531922  311307 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0127 12:29:58.531958  311307 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 12:29:58.588255  311307 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-280162 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-280162 --network=existing-network: (20.551447377s)
helpers_test.go:175: Cleaning up "existing-network-280162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-280162
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-280162: (1.842252934s)
I0127 12:30:20.999920  311307 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.53s)
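The sequence above boils down to: pre-create a labeled bridge network, then hand it to minikube. Reconstructed from the log lines (minikube also sets ip-masq, icc and MTU options on the network, trimmed here for brevity):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    out/minikube-linux-amd64 start -p existing-network-280162 --network=existing-network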

TestKicCustomSubnet (25.89s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-870586 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-870586 --subnet=192.168.60.0/24: (23.842643255s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-870586 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-870586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-870586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-870586: (2.03223143s)
--- PASS: TestKicCustomSubnet (25.89s)
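The two commands that matter here, pulled out of the log: pin the subnet at start, then read back what Docker actually allocated:

    out/minikube-linux-amd64 start -p custom-subnet-870586 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-870586 --format '{{(index .IPAM.Config 0).Subnet}}'  # expect 192.168.60.0/24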

TestKicStaticIP (23.43s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-512284 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-512284 --static-ip=192.168.200.200: (21.268722067s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-512284 ip
helpers_test.go:175: Cleaning up "static-ip-512284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-512284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-512284: (2.03842465s)
--- PASS: TestKicStaticIP (23.43s)
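Same pattern for the static-IP case: request an address at start, then confirm minikube reports it back:

    out/minikube-linux-amd64 start -p static-ip-512284 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-512284 ip  # should print 192.168.200.200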

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-399335 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-399335 --driver=docker  --container-runtime=docker: (23.46515626s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-416759 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-416759 --driver=docker  --container-runtime=docker: (22.39861717s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-399335
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-416759
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-416759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-416759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-416759: (2.043985483s)
helpers_test.go:175: Cleaning up "first-399335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-399335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-399335: (2.027931899s)
--- PASS: TestMinikubeProfile (51.09s)

TestMountStart/serial/StartWithMountFirst (6.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-377464 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-377464 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.458279898s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.46s)
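The start line above packs in every host-folder mount knob this suite exercises; spelled out, together with the check the VerifyMount* subtests below repeat:

    # --mount exposes a host folder inside the node at /minikube-host;
    # uid/gid, msize and port are the mount parameters under test
    out/minikube-linux-amd64 start -p mount-start-1-377464 --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p mount-start-1-377464 ssh -- ls /minikube-host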

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-377464 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-395800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0127 12:32:08.461869  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:13.704349  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-395800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.269567629s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.27s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-377464 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-377464 --alsologtostderr -v=5: (1.443166123s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-395800
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-395800: (1.168871649s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-395800
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-395800: (6.741074995s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (73.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-713141 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0127 12:33:31.529348  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-713141 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.922545211s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.40s)

TestMultiNode/serial/DeployApp2Nodes (58.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-713141 -- rollout status deployment/busybox: (2.533471108s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:46.371438  311307 retry.go:31] will retry after 571.866017ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:47.058949  311307 retry.go:31] will retry after 1.402684416s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:48.575920  311307 retry.go:31] will retry after 2.273874958s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:50.965612  311307 retry.go:31] will retry after 2.289439877s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:53.367181  311307 retry.go:31] will retry after 6.396834656s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:33:59.878954  311307 retry.go:31] will retry after 6.684581738s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:34:06.678833  311307 retry.go:31] will retry after 9.594722271s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0127 12:34:16.402468  311307 retry.go:31] will retry after 24.24971538s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-vtx4m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-vtx4m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-vtx4m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (58.28s)
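The long retry tail above is a single assertion polled with growing backoff; the command being repeated, for reference, passes once both busybox replicas have been assigned pod IPs:

    out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].status.podIP}'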

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-vtx4m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-vtx4m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
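The host-ping check is a small pipeline worth unpacking: nslookup resolves host.minikube.internal inside the pod, awk 'NR==5' keeps the answer line, cut pulls out the address, and that address is then pinged:

    out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p multinode-713141 -- exec busybox-58667487b6-th75q -- \
      sh -c "ping -c 1 192.168.67.1"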

TestMultiNode/serial/AddNode (18.04s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-713141 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-713141 -v 3 --alsologtostderr: (17.434307019s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.04s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-713141 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (8.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp testdata/cp-test.txt multinode-713141:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3552976911/001/cp-test_multinode-713141.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141:/home/docker/cp-test.txt multinode-713141-m02:/home/docker/cp-test_multinode-713141_multinode-713141-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test_multinode-713141_multinode-713141-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141:/home/docker/cp-test.txt multinode-713141-m03:/home/docker/cp-test_multinode-713141_multinode-713141-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test_multinode-713141_multinode-713141-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp testdata/cp-test.txt multinode-713141-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3552976911/001/cp-test_multinode-713141-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m02:/home/docker/cp-test.txt multinode-713141:/home/docker/cp-test_multinode-713141-m02_multinode-713141.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test_multinode-713141-m02_multinode-713141.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m02:/home/docker/cp-test.txt multinode-713141-m03:/home/docker/cp-test_multinode-713141-m02_multinode-713141-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test_multinode-713141-m02_multinode-713141-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp testdata/cp-test.txt multinode-713141-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3552976911/001/cp-test_multinode-713141-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m03:/home/docker/cp-test.txt multinode-713141:/home/docker/cp-test_multinode-713141-m03_multinode-713141.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141 "sudo cat /home/docker/cp-test_multinode-713141-m03_multinode-713141.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141-m03:/home/docker/cp-test.txt multinode-713141-m02:/home/docker/cp-test_multinode-713141-m03_multinode-713141-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 ssh -n multinode-713141-m02 "sudo cat /home/docker/cp-test_multinode-713141-m03_multinode-713141-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.93s)
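All the cp invocations above reduce to three directions; one representative of each, with the host-side destination path illustrative:

    # host -> node
    out/minikube-linux-amd64 -p multinode-713141 cp testdata/cp-test.txt multinode-713141:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141:/home/docker/cp-test.txt /tmp/cp-test_multinode-713141.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-713141 cp multinode-713141:/home/docker/cp-test.txt multinode-713141-m02:/home/docker/cp-test_multinode-713141_multinode-713141-m02.txt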

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-713141 node stop m03: (1.170303265s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-713141 status: exit status 7 (455.618695ms)

-- stdout --
	multinode-713141
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-713141-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-713141-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr: exit status 7 (456.062417ms)

-- stdout --
	multinode-713141
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-713141-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-713141-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:35:11.872537  513398 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:35:11.872652  513398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:11.872661  513398 out.go:358] Setting ErrFile to fd 2...
	I0127 12:35:11.872665  513398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:35:11.872852  513398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:35:11.873016  513398 out.go:352] Setting JSON to false
	I0127 12:35:11.873050  513398 mustload.go:65] Loading cluster: multinode-713141
	I0127 12:35:11.873195  513398 notify.go:220] Checking for updates...
	I0127 12:35:11.873604  513398 config.go:182] Loaded profile config "multinode-713141": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:35:11.873631  513398 status.go:174] checking status of multinode-713141 ...
	I0127 12:35:11.874240  513398 cli_runner.go:164] Run: docker container inspect multinode-713141 --format={{.State.Status}}
	I0127 12:35:11.891519  513398 status.go:371] multinode-713141 host status = "Running" (err=<nil>)
	I0127 12:35:11.891550  513398 host.go:66] Checking if "multinode-713141" exists ...
	I0127 12:35:11.891788  513398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-713141
	I0127 12:35:11.907830  513398 host.go:66] Checking if "multinode-713141" exists ...
	I0127 12:35:11.908122  513398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:35:11.908207  513398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-713141
	I0127 12:35:11.924969  513398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/multinode-713141/id_rsa Username:docker}
	I0127 12:35:12.012976  513398 ssh_runner.go:195] Run: systemctl --version
	I0127 12:35:12.016725  513398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:12.026886  513398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:35:12.075883  513398 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-01-27 12:35:12.066949337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 12:35:12.076499  513398 kubeconfig.go:125] found "multinode-713141" server: "https://192.168.67.2:8443"
	I0127 12:35:12.076532  513398 api_server.go:166] Checking apiserver status ...
	I0127 12:35:12.076573  513398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:12.087481  513398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2355/cgroup
	I0127 12:35:12.095972  513398 api_server.go:182] apiserver freezer: "12:freezer:/docker/39688a899722841a4919c24a87ec7445c0165b87779f4318269d90601a6186a3/kubepods/burstable/podb5e0541f4d272d4025df81660fa9017f/ce103cc4d561fc1cc5654db724f24ec9feb19376fadc229f257dd339bd899d85"
	I0127 12:35:12.096051  513398 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/39688a899722841a4919c24a87ec7445c0165b87779f4318269d90601a6186a3/kubepods/burstable/podb5e0541f4d272d4025df81660fa9017f/ce103cc4d561fc1cc5654db724f24ec9feb19376fadc229f257dd339bd899d85/freezer.state
	I0127 12:35:12.104707  513398 api_server.go:204] freezer state: "THAWED"
	I0127 12:35:12.104735  513398 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 12:35:12.108560  513398 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0127 12:35:12.108583  513398 status.go:463] multinode-713141 apiserver status = Running (err=<nil>)
	I0127 12:35:12.108598  513398 status.go:176] multinode-713141 status: &{Name:multinode-713141 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:35:12.108616  513398 status.go:174] checking status of multinode-713141-m02 ...
	I0127 12:35:12.108851  513398 cli_runner.go:164] Run: docker container inspect multinode-713141-m02 --format={{.State.Status}}
	I0127 12:35:12.131018  513398 status.go:371] multinode-713141-m02 host status = "Running" (err=<nil>)
	I0127 12:35:12.131046  513398 host.go:66] Checking if "multinode-713141-m02" exists ...
	I0127 12:35:12.131425  513398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-713141-m02
	I0127 12:35:12.148001  513398 host.go:66] Checking if "multinode-713141-m02" exists ...
	I0127 12:35:12.148316  513398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:35:12.148357  513398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-713141-m02
	I0127 12:35:12.165093  513398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/multinode-713141-m02/id_rsa Username:docker}
	I0127 12:35:12.253202  513398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:35:12.263423  513398 status.go:176] multinode-713141-m02 status: &{Name:multinode-713141-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:35:12.263456  513398 status.go:174] checking status of multinode-713141-m03 ...
	I0127 12:35:12.263720  513398 cli_runner.go:164] Run: docker container inspect multinode-713141-m03 --format={{.State.Status}}
	I0127 12:35:12.280330  513398 status.go:371] multinode-713141-m03 host status = "Stopped" (err=<nil>)
	I0127 12:35:12.280372  513398 status.go:384] host is not running, skipping remaining checks
	I0127 12:35:12.280384  513398 status.go:176] multinode-713141-m03 status: &{Name:multinode-713141-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
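
The stderr trace above walks minikube's status pipeline per node: docker container inspect for host state, ssh to check the kubelet unit, then locating the kube-apiserver process, confirming its cgroup freezer state is THAWED, and finally probing /healthz. A minimal sketch of that last probe, assuming a reachable endpoint; the TLS setup here is a placeholder, since minikube uses the cluster CA rather than skipping verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes a Kubernetes apiserver /healthz endpoint and
// reports whether it returned HTTP 200 with body "ok".
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Placeholder for a local test cluster; real code verifies the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return fmt.Errorf("healthz request failed: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned 200:\n%s\n", endpoint, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.67.2:8443"); err != nil {
		fmt.Println(err)
	}
}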

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-713141 node start m03 -v=7 --alsologtostderr: (8.948899914s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.61s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-713141
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-713141
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-713141: (22.267741361s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-713141 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-713141 --wait=true -v=8 --alsologtostderr: (59.818313057s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-713141
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-713141 node delete m03: (4.398616346s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.96s)
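
The last kubectl call above uses a go-template to print each node's Ready condition. The same template logic can be exercised with Go's text/template; a minimal sketch over a hand-built stand-in for the node list (field names are capitalized here to satisfy Go struct visibility, whereas kubectl evaluates the lowercase paths against unstructured JSON):

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }
type node struct {
	Status struct{ Conditions []condition }
}
type nodeList struct{ Items []node }

func main() {
	// Same shape as the template in the test, with capitalized fields.
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}` +
		`{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
	var n node
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := nodeList{Items: []node{n, n}}
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
}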

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 stop
E0127 12:37:08.465045  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-713141 stop: (21.288061067s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-713141 status: exit status 7 (86.24436ms)

                                                
                                                
-- stdout --
	multinode-713141
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-713141-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr: exit status 7 (84.849327ms)

                                                
                                                
-- stdout --
	multinode-713141
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-713141-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:37:10.457199  528478 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:37:10.457330  528478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:37:10.457340  528478 out.go:358] Setting ErrFile to fd 2...
	I0127 12:37:10.457347  528478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:37:10.457544  528478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
	I0127 12:37:10.457755  528478 out.go:352] Setting JSON to false
	I0127 12:37:10.457796  528478 mustload.go:65] Loading cluster: multinode-713141
	I0127 12:37:10.457915  528478 notify.go:220] Checking for updates...
	I0127 12:37:10.458324  528478 config.go:182] Loaded profile config "multinode-713141": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:37:10.458352  528478 status.go:174] checking status of multinode-713141 ...
	I0127 12:37:10.458866  528478 cli_runner.go:164] Run: docker container inspect multinode-713141 --format={{.State.Status}}
	I0127 12:37:10.476258  528478 status.go:371] multinode-713141 host status = "Stopped" (err=<nil>)
	I0127 12:37:10.476288  528478 status.go:384] host is not running, skipping remaining checks
	I0127 12:37:10.476295  528478 status.go:176] multinode-713141 status: &{Name:multinode-713141 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:37:10.476349  528478 status.go:174] checking status of multinode-713141-m02 ...
	I0127 12:37:10.476584  528478 cli_runner.go:164] Run: docker container inspect multinode-713141-m02 --format={{.State.Status}}
	I0127 12:37:10.492511  528478 status.go:371] multinode-713141-m02 host status = "Stopped" (err=<nil>)
	I0127 12:37:10.492534  528478 status.go:384] host is not running, skipping remaining checks
	I0127 12:37:10.492542  528478 status.go:176] multinode-713141-m02 status: &{Name:multinode-713141-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.46s)
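
For fully stopped clusters the trace above shows the status check short-circuiting: one docker container inspect --format={{.State.Status}} per node, and when the container is not running the ssh and apiserver checks are skipped. A minimal sketch of that first probe, assuming the docker CLI is on PATH (the mapping from docker's state string to minikube's Host field is paraphrased, not copied from minikube):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns docker's state string ("running", "exited", ...)
// for a named container, the same probe the status trace above runs.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-713141-m02")
	if err != nil {
		fmt.Println(err)
		return
	}
	// A non-running state corresponds to Host: Stopped in the output above.
	fmt.Println("container state:", state)
}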

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-713141 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0127 12:37:13.703173  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-713141 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (47.090707216s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-713141 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-713141
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-713141-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-713141-m02 --driver=docker  --container-runtime=docker: exit status 14 (64.359649ms)

                                                
                                                
-- stdout --
	* [multinode-713141-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-713141-m02' is duplicated with machine name 'multinode-713141-m02' in profile 'multinode-713141'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-713141-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-713141-m03 --driver=docker  --container-runtime=docker: (23.37617248s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-713141
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-713141: exit status 80 (263.423807ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-713141 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-713141-m03 already exists in multinode-713141-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-713141-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-713141-m03: (2.076956745s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.83s)
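
The exit-status-14 case above is a usage guard: a new profile name may not collide with a machine name inside an existing multi-node profile. A minimal sketch of that validation, with the profile/machine model reduced to a map of string slices (illustrative, not minikube's config types):

package main

import "fmt"

// validateProfileName rejects a new profile whose name matches a machine
// inside an existing profile, as the MK_USAGE guard above does.
func validateProfileName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf(
					"profile name %q is duplicated with machine name %q in profile %q",
					name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-713141": {"multinode-713141", "multinode-713141-m02"},
	}
	fmt.Println(validateProfileName("multinode-713141-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-713141-m03", existing)) // ok
}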

                                                
                                    
TestPreload (120.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443592 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0127 12:38:36.770637  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443592 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m28.711768017s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443592 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-443592 image pull gcr.io/k8s-minikube/busybox: (1.430384906s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-443592
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-443592: (10.597667996s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443592 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443592 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (17.272331082s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443592 image list
helpers_test.go:175: Cleaning up "test-preload-443592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-443592
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-443592: (2.155348311s)
--- PASS: TestPreload (120.37s)

                                                
                                    
TestScheduledStopUnix (93.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-757698 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-757698 --memory=2048 --driver=docker  --container-runtime=docker: (20.708850431s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757698 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-757698 -n scheduled-stop-757698
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757698 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 12:40:49.249132  311307 retry.go:31] will retry after 124.369µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.250265  311307 retry.go:31] will retry after 215.591µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.251408  311307 retry.go:31] will retry after 294.224µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.252533  311307 retry.go:31] will retry after 454.931µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.253655  311307 retry.go:31] will retry after 737.955µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.254780  311307 retry.go:31] will retry after 584.381µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.255888  311307 retry.go:31] will retry after 932.379µs: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.256998  311307 retry.go:31] will retry after 1.312932ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.259285  311307 retry.go:31] will retry after 3.697259ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.263508  311307 retry.go:31] will retry after 4.618781ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.268724  311307 retry.go:31] will retry after 8.468778ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.277957  311307 retry.go:31] will retry after 11.345599ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.290221  311307 retry.go:31] will retry after 12.022681ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.302422  311307 retry.go:31] will retry after 18.784802ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
I0127 12:40:49.321677  311307 retry.go:31] will retry after 20.940596ms: open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/scheduled-stop-757698/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757698 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757698 -n scheduled-stop-757698
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-757698
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757698 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-757698
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-757698: exit status 7 (68.491266ms)

                                                
                                                
-- stdout --
	scheduled-stop-757698
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757698 -n scheduled-stop-757698
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757698 -n scheduled-stop-757698: exit status 7 (69.252856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-757698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-757698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-757698: (1.631579276s)
--- PASS: TestScheduledStopUnix (93.64s)
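
The retry lines in this test come from minikube's retry helper polling for the scheduled-stop pid file at growing intervals. A minimal sketch of the same pattern, assuming a plain multiplicative backoff and leaving out the jitter the observed intervals suggest (the pid-file path below is hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

// retryWithBackoff retries fn with an increasing delay until it succeeds
// or attempts are exhausted, logging each wait like retry.go does above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval each round
	}
	return err
}

func main() {
	// Hypothetical path; the test polls a per-profile pid file.
	pidFile := "/tmp/scheduled-stop-example/pid"
	err := retryWithBackoff(15, 200*time.Microsecond, func() error {
		_, statErr := os.Stat(pidFile)
		return statErr
	})
	fmt.Println("final result:", err)
}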

                                                
                                    
TestSkaffold (98.42s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3344716312 version
skaffold_test.go:63: skaffold version: v2.14.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-260687 --memory=2600 --driver=docker  --container-runtime=docker
E0127 12:42:08.464420  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:42:13.704340  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-260687 --memory=2600 --driver=docker  --container-runtime=docker: (22.898647382s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3344716312 run --minikube-profile skaffold-260687 --kube-context skaffold-260687 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3344716312 run --minikube-profile skaffold-260687 --kube-context skaffold-260687 --status-check=true --port-forward=false --interactive=false: (1m0.873221465s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b7b444c7d-xflxh" [9f0d98e9-0f5b-4268-9f9a-c594832be892] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004004389s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58d66b569-9gtms" [7a04595a-442e-4931-bca7-64660d18fd9a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003249092s
helpers_test.go:175: Cleaning up "skaffold-260687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-260687
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-260687: (2.746624995s)
--- PASS: TestSkaffold (98.42s)

                                                
                                    
TestInsufficientStorage (10s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-677507 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-677507 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.828763477s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cb1c09df-ae2b-472a-9c14-213327431e5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-677507] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc68cb1b-0a5a-4362-a531-968c3d1438f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"dbfa033f-ec36-458b-a7b7-ed0600f76c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f3d51f1f-5c82-4a8c-94cc-22b25d8965c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig"}}
	{"specversion":"1.0","id":"2a4de8fb-f202-45ee-8119-8ff6d3766c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube"}}
	{"specversion":"1.0","id":"f7b113fe-84bd-4e4d-a8d2-1d24308f5ffd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"02b64352-a1c4-489d-a45f-497196363009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a330028-121a-4677-8051-c402cbd80fc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c2c56de8-ee81-47ed-828b-1f6fb6e81993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5f99d74b-0222-4b3c-8f11-17941bdc9b9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f85ad5e-928c-46b1-bfb7-ded77ca504c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a0433dbe-a8cf-4a7d-a068-294f5aa0c54c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-677507\" primary control-plane node in \"insufficient-storage-677507\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"795d286a-a05a-4575-8a8a-d0bdcd35c878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f93148b-83a8-4229-ba95-7e46d870f941","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"404b13e2-11f4-4be3-8b32-3c97d98f4a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-677507 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-677507 --output=json --layout=cluster: exit status 7 (255.588398ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-677507","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-677507","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:43:48.276368  569006 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-677507" does not appear in /home/jenkins/minikube-integration/20317-304536/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-677507 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-677507 --output=json --layout=cluster: exit status 7 (258.936668ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-677507","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-677507","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 12:43:48.535140  569104 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-677507" does not appear in /home/jenkins/minikube-integration/20317-304536/kubeconfig
	E0127 12:43:48.545141  569104 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/insufficient-storage-677507/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-677507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-677507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-677507: (1.65642676s)
--- PASS: TestInsufficientStorage (10.00s)
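
With --output=json, minikube emits one CloudEvents-style JSON object per line, and the test watches for an io.k8s.sigs.minikube.error event carrying exitcode 26. A minimal sketch of picking that event out of a line, modelling only the fields visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors just the fields of minikube's JSON output used here.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the output above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",` +
		`"message":"Docker is out of disk space!"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] == "26" {
		fmt.Println("insufficient storage detected:", ev.Data["message"])
	}
}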

                                                
                                    
TestRunningBinaryUpgrade (70.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1737945880 start -p running-upgrade-007764 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0127 12:48:26.433761  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.440451  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.451888  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.473568  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.515928  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.597688  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:26.759973  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:27.081403  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:27.723565  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:29.005208  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:31.566966  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:36.688721  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:46.930282  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1737945880 start -p running-upgrade-007764 --memory=2200 --vm-driver=docker  --container-runtime=docker: (29.429815811s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-007764 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0127 12:49:07.412032  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-007764 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.87005594s)
helpers_test.go:175: Cleaning up "running-upgrade-007764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-007764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-007764: (2.136935324s)
--- PASS: TestRunningBinaryUpgrade (70.92s)

                                                
                                    
TestKubernetesUpgrade (326.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.471155211s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-039542
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-039542: (1.275693824s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-039542 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-039542 status --format={{.Host}}: exit status 7 (66.461918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.712723261s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-039542 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (68.878794ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-039542] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-039542
	    minikube start -p kubernetes-upgrade-039542 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0395422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-039542 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0127 12:50:11.531171  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-039542 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.528571357s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-039542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-039542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-039542: (2.263338535s)
--- PASS: TestKubernetesUpgrade (326.45s)
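
The exit-status-106 step above is minikube refusing to move an existing cluster to an older Kubernetes version. A minimal sketch of that guard, comparing dotted versions numerically and ignoring pre-release tags (a simplification; minikube's real check lives elsewhere and is more thorough):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.32.1" into numeric components; malformed parts
// parse as 0, which is fine for a sketch.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		nums[i], _ = strconv.Atoi(p)
	}
	return nums
}

// isDowngrade reports whether requested is older than current,
// the condition behind the K8S_DOWNGRADE_UNSUPPORTED exit above.
func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < len(c) && i < len(r); i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return len(r) < len(c)
}

func main() {
	fmt.Println(isDowngrade("v1.32.1", "v1.20.0")) // true: refused
	fmt.Println(isDowngrade("v1.20.0", "v1.32.1")) // false: upgrade allowed
}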

                                                
                                    
TestMissingContainerUpgrade (135.41s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3981394531 start -p missing-upgrade-149623 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3981394531 start -p missing-upgrade-149623 --memory=2200 --driver=docker  --container-runtime=docker: (1m12.750403626s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-149623
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-149623: (11.683692109s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-149623
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-149623 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-149623 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.420281962s)
helpers_test.go:175: Cleaning up "missing-upgrade-149623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-149623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-149623: (2.13419588s)
--- PASS: TestMissingContainerUpgrade (135.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (69.637962ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-654361] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654361 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654361 --driver=docker  --container-runtime=docker: (34.875457636s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654361 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2873543773 start -p stopped-upgrade-731552 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2873543773 start -p stopped-upgrade-731552 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m10.507546027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2873543773 -p stopped-upgrade-731552 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2873543773 -p stopped-upgrade-731552 stop: (14.065963318s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-731552 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-731552 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.910126253s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --driver=docker  --container-runtime=docker: (14.906983718s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654361 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-654361 status -o json: exit status 2 (284.432838ms)

-- stdout --
	{"Name":"NoKubernetes-654361","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-654361
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-654361: (1.7294457s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.92s)

TestNoKubernetes/serial/Start (5.85s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654361 --no-kubernetes --driver=docker  --container-runtime=docker: (5.845877723s)
--- PASS: TestNoKubernetes/serial/Start (5.85s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.549501ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.61s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-654361
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-654361: (1.183622714s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestNoKubernetes/serial/StartNoArgs (6.95s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654361 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654361 --driver=docker  --container-runtime=docker: (6.949920923s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.891913ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-731552
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-731552: (1.319830873s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestPause/serial/Start (72.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746522 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-746522 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m12.044234828s)
--- PASS: TestPause/serial/Start (72.04s)

TestPause/serial/SecondStartNoReconfiguration (31.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746522 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-746522 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.227031489s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.24s)

TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-746522 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-746522 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-746522 --output=json --layout=cluster: exit status 2 (314.134565ms)

-- stdout --
	{"Name":"pause-746522","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-746522","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.45s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-746522 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.45s)

TestPause/serial/PauseAgain (0.63s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-746522 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

TestPause/serial/DeletePaused (2.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-746522 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-746522 --alsologtostderr -v=5: (2.122982188s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

TestPause/serial/VerifyDeletedResources (0.71s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-746522
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-746522: exit status 1 (16.485281ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-746522: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.71s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-174699 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0127 12:49:48.373870  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-174699 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.948359715s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.95s)

TestStartStop/group/no-preload/serial/FirstStart (38.44s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-592739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-592739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (38.439455195s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (38.44s)

TestStartStop/group/embed-certs/serial/FirstStart (36.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-136618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-136618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (36.675649989s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (36.68s)

TestStartStop/group/no-preload/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-592739 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f023421a-b860-447d-a5ae-5100001fb9e3] Pending
helpers_test.go:344: "busybox" [f023421a-b860-447d-a5ae-5100001fb9e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f023421a-b860-447d-a5ae-5100001fb9e3] Running
E0127 12:51:10.295882  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004075772s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-592739 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-592739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-592739 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (10.77s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-592739 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-592739 --alsologtostderr -v=3: (10.767143498s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.77s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-592739 -n no-preload-592739
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-592739 -n no-preload-592739: exit status 7 (103.334139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-592739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (298.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-592739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-592739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (4m58.398471561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-592739 -n no-preload-592739
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.74s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-136618 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e3648002-fa39-4316-aae5-234c4a9ccc98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e3648002-fa39-4316-aae5-234c4a9ccc98] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004056483s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-136618 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-136618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-136618 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (10.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-136618 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-136618 --alsologtostderr -v=3: (10.746270764s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136618 -n embed-certs-136618
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136618 -n embed-certs-136618: exit status 7 (121.200086ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-136618 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (262.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-136618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-136618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (4m22.079695153s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-136618 -n embed-certs-136618
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174699 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [321d2ca8-0ace-474c-a60f-ca5bd8470932] Pending
E0127 12:52:08.461951  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [321d2ca8-0ace-474c-a60f-ca5bd8470932] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [321d2ca8-0ace-474c-a60f-ca5bd8470932] Running
E0127 12:52:13.703522  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004140559s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-174699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-174699 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (10.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-174699 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-174699 --alsologtostderr -v=3: (10.651853908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174699 -n old-k8s-version-174699
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174699 -n old-k8s-version-174699: exit status 7 (70.819435ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-174699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (137.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-174699 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0127 12:53:26.433925  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:54.137534  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-174699 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m16.70714539s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174699 -n old-k8s-version-174699
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (137.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bpwxf" [58f55ff4-6b75-46bf-9172-ca8ebb3725ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003682635s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bpwxf" [58f55ff4-6b75-46bf-9172-ca8ebb3725ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003261538s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-174699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-174699 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-174699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174699 -n old-k8s-version-174699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174699 -n old-k8s-version-174699: exit status 2 (297.406519ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174699 -n old-k8s-version-174699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174699 -n old-k8s-version-174699: exit status 2 (298.639119ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-174699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174699 -n old-k8s-version-174699
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174699 -n old-k8s-version-174699
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-359066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
E0127 12:55:16.772756  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-359066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (41.495284395s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-359066 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c8ee1bb1-0852-48c6-95a8-4913bcd5649e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c8ee1bb1-0852-48c6-95a8-4913bcd5649e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004116698s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-359066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-359066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-359066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-359066 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-359066 --alsologtostderr -v=3: (10.801846521s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.80s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066: exit status 7 (113.607243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-359066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-359066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-359066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (4m23.165127546s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.55s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r9s84" [8bfa5048-0e83-4816-a8cb-1758c0a4833e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003770633s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r9s84" [8bfa5048-0e83-4816-a8cb-1758c0a4833e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004254664s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-136618 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vp4ff" [8d0b1847-4e39-4ecf-a4dc-99758bbd92ba] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004165256s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-136618 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-136618 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136618 -n embed-certs-136618
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136618 -n embed-certs-136618: exit status 2 (330.352652ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-136618 -n embed-certs-136618
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-136618 -n embed-certs-136618: exit status 2 (340.314709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-136618 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-136618 -n embed-certs-136618
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-136618 -n embed-certs-136618
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (33.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-942234 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-942234 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (33.867764308s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.87s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vp4ff" [8d0b1847-4e39-4ecf-a4dc-99758bbd92ba] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004155265s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-592739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-592739 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-592739 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-592739 -n no-preload-592739
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-592739 -n no-preload-592739: exit status 2 (363.236674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-592739 -n no-preload-592739
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-592739 -n no-preload-592739: exit status 2 (362.60855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-592739 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-592739 -n no-preload-592739
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-592739 -n no-preload-592739
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)

TestNetworkPlugins/group/auto/Start (63.3s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m3.304637049s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.30s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-942234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/Stop (10.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-942234 --alsologtostderr -v=3
E0127 12:57:08.351927  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.358305  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.369643  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.391027  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.432393  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.461781  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/addons-467520/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.514150  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.675678  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:08.997799  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:09.639882  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:10.921933  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:13.484069  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:57:13.702634  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/functional-953711/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-942234 --alsologtostderr -v=3: (10.800778327s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.80s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-942234 -n newest-cni-942234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-942234 -n newest-cni-942234: exit status 7 (125.294491ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-942234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
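
The "exit status 7 (may be ok)" above is informational rather than a failure. A minimal sketch of how that status is usually read, assuming minikube's status exit code is a bitmask (bit 0 = host stopped, bit 1 = control plane stopped, bit 2 = kubelet stopped, so 7 means everything is down, which is expected right after a stop):

# sketch, not part of the harness; binary path and profile name taken from the run above
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-942234
rc=$?
if [ "$rc" -eq 7 ]; then
    # fully stopped cluster; addon state can still be toggled offline,
    # which is what the EnableAddonAfterStop step verifies next
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-942234
fi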

TestStartStop/group/newest-cni/serial/SecondStart (14.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-942234 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1
E0127 12:57:18.606115  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-942234 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.1: (13.906943357s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-942234 -n newest-cni-942234
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-942234 image list --format=json
E0127 12:57:28.848020  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-942234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-942234 -n newest-cni-942234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-942234 -n newest-cni-942234: exit status 2 (287.359255ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-942234 -n newest-cni-942234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-942234 -n newest-cni-942234: exit status 2 (283.181282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-942234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-942234 -n newest-cni-942234
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-942234 -n newest-cni-942234
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)
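
The Pause step drives a full pause/unpause round-trip. Condensed, with the statuses and exit codes taken from the output above (exit status 2 is expected while components are paused or stopped):

out/minikube-linux-amd64 pause -p newest-cni-942234
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-942234  # "Paused", exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-942234    # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p newest-cni-942234
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-942234  # exit 0 once running again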

TestNetworkPlugins/group/custom-flannel/Start (52.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (52.610542164s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.61s)
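
Unlike the named plugins elsewhere in this run (--cni=kindnet, --cni=calico, and so on), the custom-flannel job hands --cni a manifest path, here the repo's testdata/kube-flannel.yaml. A minimal sketch of the same usage outside the harness, with a hypothetical local copy of the manifest and a hypothetical profile name:

out/minikube-linux-amd64 start -p custom-flannel-demo \
    --cni=./kube-flannel.yaml \
    --driver=docker --container-runtime=docker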

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-244099 "pgrep -a kubelet"
I0127 12:57:42.914322  311307 config.go:182] Loaded profile config "auto-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
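
The KubeletFlags check is a one-liner over the minikube ssh channel: it lists the running kubelet together with its full command line so the network flags can be inspected. Reproduced by hand from the invocation above:

out/minikube-linux-amd64 ssh -p auto-244099 "pgrep -a kubelet"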

TestNetworkPlugins/group/auto/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h5kf6" [359ba313-5891-4ee3-a3ba-cdf9a9a4f90e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h5kf6" [359ba313-5891-4ee3-a3ba-cdf9a9a4f90e] Running
E0127 12:57:49.330358  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004278815s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.18s)
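
The NetCatPod step deploys the suite's probe pod and then polls for pods matching app=netcat, as the wait line above shows. The same gate expressed directly with kubectl (a sketch; the deployment manifest itself lives in the repo's testdata and is not reproduced in this log):

kubectl --context auto-244099 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-244099 wait --for=condition=ready pod -l app=netcat --timeout=15m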

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
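
The DNS, Localhost and HairPin steps above form the connectivity matrix every network plugin in this run is pushed through. In isolation (nc flags: -z probe without sending data, -w 5 five-second timeout, -i 5 seconds between probes):

kubectl --context auto-244099 exec deployment/netcat -- nslookup kubernetes.default
    # DNS: in-cluster name resolution
kubectl --context auto-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Localhost: the pod reaches its own port
kubectl --context auto-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    # HairPin: the pod reaches itself through its own "netcat" service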

TestNetworkPlugins/group/false/Start (37.63s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (37.625368706s)
--- PASS: TestNetworkPlugins/group/false/Start (37.63s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-244099 "pgrep -a kubelet"
E0127 12:58:26.434538  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/skaffold-260687/client.crt: no such file or directory" logger="UnhandledError"
I0127 12:58:26.659747  311307 config.go:182] Loaded profile config "custom-flannel-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-49r84" [4bcc0943-bb32-4824-ba2f-d856fa79cc61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 12:58:30.292416  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/old-k8s-version-174699/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-49r84" [4bcc0943-bb32-4824-ba2f-d856fa79cc61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004032825s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-244099 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-d2jlf" [4e3e7ef7-01a3-4580-aeab-56c7546ce208] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-d2jlf" [4e3e7ef7-01a3-4580-aeab-56c7546ce208] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003942393s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/Start (61.4s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m1.40429712s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.40s)

TestNetworkPlugins/group/flannel/Start (34.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (34.164133195s)
--- PASS: TestNetworkPlugins/group/flannel/Start (34.16s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (66.67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m6.666863417s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.67s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f2zcd" [6d3d35ae-108a-4794-a603-5de71de8a625] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004780842s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
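
Plugins that ship a controller (flannel here, kindnet and calico later in the run) get an extra gate: the suite waits for the controller pods to report healthy before probing connectivity. The manual equivalent, using the label and namespace from the log above:

kubectl --context flannel-244099 -n kube-flannel wait \
    --for=condition=ready pod -l app=flannel --timeout=10m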

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-244099 "pgrep -a kubelet"
I0127 12:59:38.027415  311307 config.go:182] Loaded profile config "flannel-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-26rpz" [5cb33b5d-b9e3-47c7-96a1-3e68b7836c49] Pending
helpers_test.go:344: "netcat-5d86dc444-26rpz" [5cb33b5d-b9e3-47c7-96a1-3e68b7836c49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-26rpz" [5cb33b5d-b9e3-47c7-96a1-3e68b7836c49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004016408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lpjds" [fcc3edb5-523b-41d2-b886-47b430c02033] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004782982s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-244099 "pgrep -a kubelet"
I0127 13:00:02.100292  311307 config.go:182] Loaded profile config "kindnet-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tgvtm" [39e07bbf-c67e-4b94-aa50-b7ede2e5f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tgvtm" [39e07bbf-c67e-4b94-aa50-b7ede2e5f0b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.021875182s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/enable-default-cni/Start (64.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m4.820142561s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.82s)
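
--enable-default-cni=true is the older spelling for minikube's built-in bridge CNI; current minikube points users at --cni=bridge instead (an assumption about the present flag help, the run itself only exercises the legacy form):

out/minikube-linux-amd64 start -p enable-default-cni-244099 --cni=bridge --driver=docker --container-runtime=docker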

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-78qpl" [f48412d7-7f5f-4cb7-99a5-3960d62f6821] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004869286s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nwgj2" [b0231a95-a5b7-4217-a38f-bb3eda7f615f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003537581s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/bridge/Start (70.06s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m10.064816461s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.06s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-244099 "pgrep -a kubelet"
I0127 13:00:33.397243  311307 config.go:182] Loaded profile config "calico-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qcd4z" [c0e2eed3-c81f-4584-afa6-a6969d274011] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qcd4z" [c0e2eed3-c81f-4584-afa6-a6969d274011] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004275007s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nwgj2" [b0231a95-a5b7-4217-a38f-bb3eda7f615f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004132252s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-359066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-359066 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
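
The image audit lists what the node's runtime holds and reports anything outside the stock Kubernetes set, such as the busybox image flagged above. By hand (the jq filter is illustrative; the field names are an assumption about the JSON layout):

out/minikube-linux-amd64 -p default-k8s-diff-port-359066 image list --format=json | jq -r '.[].repoTags[]'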

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-359066 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066: exit status 2 (300.316157ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066: exit status 2 (319.194965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-359066 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-359066 -n default-k8s-diff-port-359066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)
E0127 13:01:05.776914  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:01:06.418993  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:01:07.701104  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:01:10.262853  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (37.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-244099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (37.739735914s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (37.74s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-244099 "pgrep -a kubelet"
I0127 13:01:14.497453  311307 config.go:182] Loaded profile config "enable-default-cni-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t46nn" [7d1374c3-eef2-4e57-b115-ce5a5c71392f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 13:01:15.384680  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-t46nn" [7d1374c3-eef2-4e57-b115-ce5a5c71392f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004401375s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-244099 "pgrep -a kubelet"
I0127 13:01:23.418138  311307 config.go:182] Loaded profile config "kubenet-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-d2cng" [206507b2-af97-428b-89d2-c6f407ad27f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-d2cng" [206507b2-af97-428b-89d2-c6f407ad27f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003372058s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-244099 "pgrep -a kubelet"
I0127 13:01:41.522956  311307 config.go:182] Loaded profile config "bridge-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-244099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kzlwv" [93831b7a-844a-4ca3-afaf-34eb688718e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kzlwv" [93831b7a-844a-4ca3-afaf-34eb688718e4] Running
E0127 13:01:46.108661  311307 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/no-preload-592739/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00441055s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-244099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-244099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

Test skip (21/345)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
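
Note: the gvisor test is gated behind the suite's --gvisor flag, which defaults to false. Enabling it would look roughly like the following (illustrative; the exact flag wiring depends on the suite's flag definitions):

    go test ./test/integration -run TestGvisorAddon -args --gvisor=true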

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
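
Note: satisfying this precondition means running as root via sudo (which sets SUDO_USER to the invoking user) against the none driver, e.g. (illustrative):

    sudo out/minikube-linux-amd64 start --driver=none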

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-246844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-246844
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
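
Note: --disable-driver-mounts targets the hypervisor-provided host mounts, which is why this group only runs on virtualbox. An exercising invocation would resemble (illustrative; profile name taken from the cleanup above):

    out/minikube-linux-amd64 start -p disable-driver-mounts-246844 --driver=virtualbox --disable-driver-mounts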

TestNetworkPlugins/group/cilium (3.46s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-244099 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244099
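
Note: every command in this debug dump fails the same way because the test was skipped before a cilium-244099 cluster was ever created, so no such profile or kubeconfig context exists; the kubectl config dump further down confirms this.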

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-244099

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-244099

>>> host: /etc/nsswitch.conf:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/hosts:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/resolv.conf:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-244099

>>> host: crictl pods:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: crictl containers:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> k8s: describe netcat deployment:
error: context "cilium-244099" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-244099" does not exist

>>> k8s: netcat logs:
error: context "cilium-244099" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-244099" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-244099" does not exist

>>> k8s: coredns logs:
error: context "cilium-244099" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-244099" does not exist

>>> k8s: api server logs:
error: context "cilium-244099" does not exist

>>> host: /etc/cni:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: ip a s:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: ip r s:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: iptables-save:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: iptables table nat:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-244099

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-244099

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-244099" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-244099" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-244099

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-244099

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-244099" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-244099" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-244099" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-244099" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-244099" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: kubelet daemon config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> k8s: kubelet logs:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:45:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-039542
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-649313
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:46:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-746522
contexts:
- context:
    cluster: kubernetes-upgrade-039542
    user: kubernetes-upgrade-039542
  name: kubernetes-upgrade-039542
- context:
    cluster: offline-docker-649313
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-docker-649313
  name: offline-docker-649313
- context:
    cluster: pause-746522
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 12:46:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-746522
  name: pause-746522
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-039542
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/kubernetes-upgrade-039542/client.crt
    client-key: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/kubernetes-upgrade-039542/client.key
- name: offline-docker-649313
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt
    client-key: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key
- name: pause-746522
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/pause-746522/client.crt
    client-key: /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/pause-746522/client.key
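
Note: the config lists three unrelated profiles and current-context: "", with no cilium-244099 entry, which is consistent with the "context was not found" errors above. A standard way to verify from a shell:

    kubectl config get-contexts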

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-244099

>>> host: docker daemon status:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: docker daemon config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: docker system info:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: cri-docker daemon status:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: cri-docker daemon config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: cri-dockerd version:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: containerd daemon status:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: containerd daemon config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: containerd config dump:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: crio daemon status:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: crio daemon config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: /etc/crio:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

>>> host: crio config:
* Profile "cilium-244099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244099"

----------------------- debugLogs end: cilium-244099 [took: 3.304171146s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-244099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-244099
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)