Test Report: Docker_Linux_containerd 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Failed tests (1/326)

Order  Failed test                              Duration (s)
351    TestNetworkPlugins/group/calico/Start    907.56
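The failing start command is captured verbatim in the log below. A minimal sketch of how to reproduce a comparable run locally is shown here; the flags are copied from the log, while the profile name calico-964891 and the out/minikube-linux-amd64 binary path are specific to this CI job, so substitute your own profile name and minikube binary:

    # flags copied from the failed run below; assumes a working local Docker daemon
    minikube start -p calico-964891 \
      --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m \
      --cni=calico --driver=docker \
      --container-runtime=containerd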
TestNetworkPlugins/group/calico/Start (907.56s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (15m7.515767437s)

-- stdout --
	* [calico-964891] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-964891" primary control-plane node in "calico-964891" cluster
	* Pulling base image v0.0.47-1756980985-21488 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0908 14:10:54.092056 1703230 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:10:54.092199 1703230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:10:54.092211 1703230 out.go:374] Setting ErrFile to fd 2...
	I0908 14:10:54.092217 1703230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:10:54.092464 1703230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 14:10:54.093114 1703230 out.go:368] Setting JSON to false
	I0908 14:10:54.094558 1703230 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13998,"bootTime":1757326656,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:10:54.094687 1703230 start.go:140] virtualization: kvm guest
	I0908 14:10:54.096815 1703230 out.go:179] * [calico-964891] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:10:54.098434 1703230 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:10:54.098428 1703230 notify.go:220] Checking for updates...
	I0908 14:10:54.100096 1703230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:10:54.101775 1703230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 14:10:54.103257 1703230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 14:10:54.104691 1703230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:10:54.106112 1703230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:10:54.108050 1703230 config.go:182] Loaded profile config "auto-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:10:54.108200 1703230 config.go:182] Loaded profile config "default-k8s-diff-port-288682": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:10:54.108294 1703230 config.go:182] Loaded profile config "kindnet-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:10:54.108438 1703230 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:10:54.136250 1703230 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:10:54.136505 1703230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:10:54.194464 1703230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-09-08 14:10:54.183491161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:10:54.194585 1703230 docker.go:318] overlay module found
	I0908 14:10:54.197382 1703230 out.go:179] * Using the docker driver based on user configuration
	I0908 14:10:54.198798 1703230 start.go:304] selected driver: docker
	I0908 14:10:54.198823 1703230 start.go:918] validating driver "docker" against <nil>
	I0908 14:10:54.198844 1703230 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:10:54.199816 1703230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:10:54.262224 1703230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-09-08 14:10:54.25088525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:10:54.262445 1703230 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:10:54.262652 1703230 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:10:54.264547 1703230 out.go:179] * Using Docker driver with root privileges
	I0908 14:10:54.265977 1703230 cni.go:84] Creating CNI manager for "calico"
	I0908 14:10:54.266026 1703230 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0908 14:10:54.266117 1703230 start.go:348] cluster config:
	{Name:calico-964891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-964891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:10:54.267732 1703230 out.go:179] * Starting "calico-964891" primary control-plane node in "calico-964891" cluster
	I0908 14:10:54.269227 1703230 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 14:10:54.270713 1703230 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 14:10:54.272227 1703230 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:10:54.272278 1703230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0908 14:10:54.272291 1703230 cache.go:58] Caching tarball of preloaded images
	I0908 14:10:54.272368 1703230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 14:10:54.272404 1703230 preload.go:172] Found /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0908 14:10:54.272417 1703230 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 14:10:54.272538 1703230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/config.json ...
	I0908 14:10:54.272562 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/config.json: {Name:mkce996b7412e31b73b4134f304a8d315e8ed3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:10:54.298128 1703230 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 14:10:54.298150 1703230 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 14:10:54.298165 1703230 cache.go:232] Successfully downloaded all kic artifacts
	I0908 14:10:54.298192 1703230 start.go:360] acquireMachinesLock for calico-964891: {Name:mk36f6321c69a0f6de41035abcda17b1b4f48bb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:10:54.298289 1703230 start.go:364] duration metric: took 81.414µs to acquireMachinesLock for "calico-964891"
	I0908 14:10:54.298315 1703230 start.go:93] Provisioning new machine with config: &{Name:calico-964891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-964891 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:10:54.298380 1703230 start.go:125] createHost starting for "" (driver="docker")
	I0908 14:10:54.300318 1703230 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 14:10:54.300524 1703230 start.go:159] libmachine.API.Create for "calico-964891" (driver="docker")
	I0908 14:10:54.300550 1703230 client.go:168] LocalClient.Create starting
	I0908 14:10:54.300620 1703230 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem
	I0908 14:10:54.300647 1703230 main.go:141] libmachine: Decoding PEM data...
	I0908 14:10:54.300663 1703230 main.go:141] libmachine: Parsing certificate...
	I0908 14:10:54.300713 1703230 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/cert.pem
	I0908 14:10:54.300731 1703230 main.go:141] libmachine: Decoding PEM data...
	I0908 14:10:54.300739 1703230 main.go:141] libmachine: Parsing certificate...
	I0908 14:10:54.301012 1703230 cli_runner.go:164] Run: docker network inspect calico-964891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 14:10:54.319047 1703230 cli_runner.go:211] docker network inspect calico-964891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 14:10:54.319131 1703230 network_create.go:284] running [docker network inspect calico-964891] to gather additional debugging logs...
	I0908 14:10:54.319150 1703230 cli_runner.go:164] Run: docker network inspect calico-964891
	W0908 14:10:54.338874 1703230 cli_runner.go:211] docker network inspect calico-964891 returned with exit code 1
	I0908 14:10:54.338912 1703230 network_create.go:287] error running [docker network inspect calico-964891]: docker network inspect calico-964891: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-964891 not found
	I0908 14:10:54.338925 1703230 network_create.go:289] output of [docker network inspect calico-964891]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-964891 not found
	
	** /stderr **
	I0908 14:10:54.339072 1703230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:10:54.359808 1703230 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64b2234f707e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:9a:1c:9c:54:4c} reservation:<nil>}
	I0908 14:10:54.360568 1703230 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cb8965ff37a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:a1:ba:52:19:a4} reservation:<nil>}
	I0908 14:10:54.361110 1703230 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1736be2dcb9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:3b:40:07:6b:fb} reservation:<nil>}
	I0908 14:10:54.361989 1703230 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2b028d964bc0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:5d:04:00:2b:5f} reservation:<nil>}
	I0908 14:10:54.362838 1703230 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00060bec0}
	I0908 14:10:54.362863 1703230 network_create.go:124] attempt to create docker network calico-964891 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0908 14:10:54.362920 1703230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-964891 calico-964891
	I0908 14:10:54.423086 1703230 network_create.go:108] docker network calico-964891 192.168.85.0/24 created
	I0908 14:10:54.423125 1703230 kic.go:121] calculated static IP "192.168.85.2" for the "calico-964891" container
	I0908 14:10:54.423189 1703230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 14:10:54.442008 1703230 cli_runner.go:164] Run: docker volume create calico-964891 --label name.minikube.sigs.k8s.io=calico-964891 --label created_by.minikube.sigs.k8s.io=true
	I0908 14:10:54.461921 1703230 oci.go:103] Successfully created a docker volume calico-964891
	I0908 14:10:54.462022 1703230 cli_runner.go:164] Run: docker run --rm --name calico-964891-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-964891 --entrypoint /usr/bin/test -v calico-964891:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 14:10:54.944392 1703230 oci.go:107] Successfully prepared a docker volume calico-964891
	I0908 14:10:54.944422 1703230 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:10:54.944444 1703230 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 14:10:54.944505 1703230 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-964891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 14:10:59.892608 1703230 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-964891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.948039127s)
	I0908 14:10:59.892649 1703230 kic.go:203] duration metric: took 4.948200736s to extract preloaded images to volume ...
	W0908 14:10:59.892801 1703230 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 14:10:59.892933 1703230 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 14:10:59.946484 1703230 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-964891 --name calico-964891 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-964891 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-964891 --network calico-964891 --ip 192.168.85.2 --volume calico-964891:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 14:11:00.269875 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Running}}
	I0908 14:11:00.297407 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:00.320177 1703230 cli_runner.go:164] Run: docker exec calico-964891 stat /var/lib/dpkg/alternatives/iptables
	I0908 14:11:00.369706 1703230 oci.go:144] the created container "calico-964891" has a running status.
	I0908 14:11:00.369747 1703230 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa...
	I0908 14:11:00.601128 1703230 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 14:11:00.624347 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:00.657455 1703230 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 14:11:00.657483 1703230 kic_runner.go:114] Args: [docker exec --privileged calico-964891 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 14:11:00.748133 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:00.787954 1703230 machine.go:93] provisionDockerMachine start ...
	I0908 14:11:00.788067 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:00.811059 1703230 main.go:141] libmachine: Using SSH client type: native
	I0908 14:11:00.811479 1703230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I0908 14:11:00.811494 1703230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:11:01.043270 1703230 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-964891
	
	I0908 14:11:01.043300 1703230 ubuntu.go:182] provisioning hostname "calico-964891"
	I0908 14:11:01.043372 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.074912 1703230 main.go:141] libmachine: Using SSH client type: native
	I0908 14:11:01.075171 1703230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I0908 14:11:01.075193 1703230 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-964891 && echo "calico-964891" | sudo tee /etc/hostname
	I0908 14:11:01.220232 1703230 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-964891
	
	I0908 14:11:01.220302 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.260536 1703230 main.go:141] libmachine: Using SSH client type: native
	I0908 14:11:01.260841 1703230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I0908 14:11:01.260877 1703230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-964891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-964891/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-964891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:11:01.390617 1703230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:11:01.390650 1703230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-1407098/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-1407098/.minikube}
	I0908 14:11:01.390689 1703230 ubuntu.go:190] setting up certificates
	I0908 14:11:01.390702 1703230 provision.go:84] configureAuth start
	I0908 14:11:01.390770 1703230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-964891
	I0908 14:11:01.409784 1703230 provision.go:143] copyHostCerts
	I0908 14:11:01.409863 1703230 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.pem, removing ...
	I0908 14:11:01.409872 1703230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.pem
	I0908 14:11:01.409942 1703230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.pem (1082 bytes)
	I0908 14:11:01.410102 1703230 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1407098/.minikube/cert.pem, removing ...
	I0908 14:11:01.410116 1703230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1407098/.minikube/cert.pem
	I0908 14:11:01.410143 1703230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-1407098/.minikube/cert.pem (1123 bytes)
	I0908 14:11:01.410214 1703230 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-1407098/.minikube/key.pem, removing ...
	I0908 14:11:01.410221 1703230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-1407098/.minikube/key.pem
	I0908 14:11:01.410242 1703230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-1407098/.minikube/key.pem (1675 bytes)
	I0908 14:11:01.410308 1703230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca-key.pem org=jenkins.calico-964891 san=[127.0.0.1 192.168.85.2 calico-964891 localhost minikube]
	I0908 14:11:01.472339 1703230 provision.go:177] copyRemoteCerts
	I0908 14:11:01.472430 1703230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:11:01.472484 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.492033 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:01.583808 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:11:01.609443 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 14:11:01.635918 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:11:01.673350 1703230 provision.go:87] duration metric: took 282.630547ms to configureAuth
	I0908 14:11:01.673392 1703230 ubuntu.go:206] setting minikube options for container-runtime
	I0908 14:11:01.673592 1703230 config.go:182] Loaded profile config "calico-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:11:01.673607 1703230 machine.go:96] duration metric: took 885.629088ms to provisionDockerMachine
	I0908 14:11:01.673615 1703230 client.go:171] duration metric: took 7.373059215s to LocalClient.Create
	I0908 14:11:01.673641 1703230 start.go:167] duration metric: took 7.373116193s to libmachine.API.Create "calico-964891"
	I0908 14:11:01.673655 1703230 start.go:293] postStartSetup for "calico-964891" (driver="docker")
	I0908 14:11:01.673670 1703230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:11:01.673723 1703230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:11:01.673777 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.693548 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:01.783927 1703230 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:11:01.787835 1703230 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 14:11:01.787868 1703230 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 14:11:01.787876 1703230 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 14:11:01.787885 1703230 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 14:11:01.787898 1703230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1407098/.minikube/addons for local assets ...
	I0908 14:11:01.787997 1703230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-1407098/.minikube/files for local assets ...
	I0908 14:11:01.788077 1703230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-1407098/.minikube/files/etc/ssl/certs/14107722.pem -> 14107722.pem in /etc/ssl/certs
	I0908 14:11:01.788161 1703230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:11:01.797647 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/files/etc/ssl/certs/14107722.pem --> /etc/ssl/certs/14107722.pem (1708 bytes)
	I0908 14:11:01.824381 1703230 start.go:296] duration metric: took 150.7077ms for postStartSetup
	I0908 14:11:01.824734 1703230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-964891
	I0908 14:11:01.855563 1703230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/config.json ...
	I0908 14:11:01.855838 1703230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:11:01.855884 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.879343 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:01.967242 1703230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 14:11:01.972108 1703230 start.go:128] duration metric: took 7.673711332s to createHost
	I0908 14:11:01.972141 1703230 start.go:83] releasing machines lock for "calico-964891", held for 7.673841058s
	I0908 14:11:01.972213 1703230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-964891
	I0908 14:11:01.990219 1703230 ssh_runner.go:195] Run: cat /version.json
	I0908 14:11:01.990279 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:01.990279 1703230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:11:01.990348 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:02.009858 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:02.010630 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:02.097612 1703230 ssh_runner.go:195] Run: systemctl --version
	I0908 14:11:02.186648 1703230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 14:11:02.191692 1703230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 14:11:02.219733 1703230 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 14:11:02.219825 1703230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:11:02.262500 1703230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 14:11:02.262526 1703230 start.go:495] detecting cgroup driver to use...
	I0908 14:11:02.262558 1703230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 14:11:02.262596 1703230 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 14:11:02.276206 1703230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 14:11:02.288440 1703230 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:11:02.288496 1703230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:11:02.302209 1703230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:11:02.317116 1703230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:11:02.403449 1703230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:11:02.511654 1703230 docker.go:234] disabling docker service ...
	I0908 14:11:02.511720 1703230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:11:02.547817 1703230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:11:02.566711 1703230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:11:02.668399 1703230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:11:02.784908 1703230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:11:02.798604 1703230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:11:02.816473 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 14:11:02.828006 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 14:11:02.853340 1703230 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 14:11:02.853423 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 14:11:02.868366 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:11:02.880454 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 14:11:02.892177 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:11:02.902512 1703230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:11:02.912099 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 14:11:02.923235 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 14:11:02.936590 1703230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 14:11:02.952078 1703230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:11:02.967487 1703230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:11:02.977576 1703230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:11:03.084464 1703230 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 14:11:03.227670 1703230 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 14:11:03.227750 1703230 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 14:11:03.234526 1703230 start.go:563] Will wait 60s for crictl version
	I0908 14:11:03.234588 1703230 ssh_runner.go:195] Run: which crictl
	I0908 14:11:03.240504 1703230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:11:03.286167 1703230 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 14:11:03.286226 1703230 ssh_runner.go:195] Run: containerd --version
	I0908 14:11:03.312683 1703230 ssh_runner.go:195] Run: containerd --version
	I0908 14:11:03.345199 1703230 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 14:11:03.346354 1703230 cli_runner.go:164] Run: docker network inspect calico-964891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:11:03.369637 1703230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 14:11:03.374776 1703230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:11:03.387572 1703230 kubeadm.go:875] updating cluster {Name:calico-964891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-964891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:11:03.387696 1703230 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:11:03.387762 1703230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:11:03.427347 1703230 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:11:03.427370 1703230 containerd.go:534] Images already preloaded, skipping extraction
	I0908 14:11:03.427421 1703230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:11:03.485525 1703230 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:11:03.485553 1703230 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:11:03.485562 1703230 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0908 14:11:03.485677 1703230 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-964891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-964891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0908 14:11:03.485743 1703230 ssh_runner.go:195] Run: sudo crictl info
	I0908 14:11:03.530021 1703230 cni.go:84] Creating CNI manager for "calico"
	I0908 14:11:03.530062 1703230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:11:03.530120 1703230 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-964891 NodeName:calico-964891 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:11:03.530292 1703230 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-964891"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:11:03.530372 1703230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:11:03.544878 1703230 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:11:03.544952 1703230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:11:03.558834 1703230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0908 14:11:03.582045 1703230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:11:03.602292 1703230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I0908 14:11:03.622909 1703230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 14:11:03.626689 1703230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:11:03.648149 1703230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:11:03.736887 1703230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:11:03.759249 1703230 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891 for IP: 192.168.85.2
	I0908 14:11:03.759281 1703230 certs.go:194] generating shared ca certs ...
	I0908 14:11:03.759303 1703230 certs.go:226] acquiring lock for ca certs: {Name:mk3d5c24da5741a2873f371ed7b0338124adca3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:03.759524 1703230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.key
	I0908 14:11:03.759581 1703230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/proxy-client-ca.key
	I0908 14:11:03.759591 1703230 certs.go:256] generating profile certs ...
	I0908 14:11:03.759710 1703230 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.key
	I0908 14:11:03.759732 1703230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.crt with IP's: []
	I0908 14:11:04.817009 1703230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.crt ...
	I0908 14:11:04.817048 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.crt: {Name:mk8f7c199623998c5a81c623e18a91ed1bdc434e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:04.817262 1703230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.key ...
	I0908 14:11:04.817336 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/client.key: {Name:mkf2c8ce83441d5f174df48f65cfd8bfb3c9fdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:04.817470 1703230 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key.a584ed45
	I0908 14:11:04.817492 1703230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt.a584ed45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0908 14:11:05.464236 1703230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt.a584ed45 ...
	I0908 14:11:05.464284 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt.a584ed45: {Name:mkbe4ccf407f88291e86a2d363f69724c79fe370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:05.464503 1703230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key.a584ed45 ...
	I0908 14:11:05.464521 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key.a584ed45: {Name:mk13d8f096684ebb01e42b8131174cab89e47073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:05.464612 1703230 certs.go:381] copying /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt.a584ed45 -> /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt
	I0908 14:11:05.464703 1703230 certs.go:385] copying /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key.a584ed45 -> /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key
	I0908 14:11:05.464772 1703230 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.key
	I0908 14:11:05.464791 1703230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.crt with IP's: []
	I0908 14:11:05.698698 1703230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.crt ...
	I0908 14:11:05.698732 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.crt: {Name:mk6c43d03bcbf1bba7b7b4f425a434b9ff39c996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:05.698944 1703230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.key ...
	I0908 14:11:05.698965 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.key: {Name:mk34c93cee3414e96e75abcb9c97f9ccd980b4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:05.699154 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/1410772.pem (1338 bytes)
	W0908 14:11:05.699195 1703230 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/1410772_empty.pem, impossibly tiny 0 bytes
	I0908 14:11:05.699203 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:11:05.699224 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:11:05.699245 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:11:05.699264 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/key.pem (1675 bytes)
	I0908 14:11:05.699303 1703230 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-1407098/.minikube/files/etc/ssl/certs/14107722.pem (1708 bytes)
	I0908 14:11:05.699966 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:11:05.727695 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:11:05.754070 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:11:05.780404 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 14:11:05.806861 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 14:11:05.839700 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:11:05.872345 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:11:05.899348 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/calico-964891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:11:05.925568 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:11:05.951330 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/certs/1410772.pem --> /usr/share/ca-certificates/1410772.pem (1338 bytes)
	I0908 14:11:05.979331 1703230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-1407098/.minikube/files/etc/ssl/certs/14107722.pem --> /usr/share/ca-certificates/14107722.pem (1708 bytes)
	I0908 14:11:06.009674 1703230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:11:06.032166 1703230 ssh_runner.go:195] Run: openssl version
	I0908 14:11:06.038238 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:11:06.048422 1703230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:11:06.052377 1703230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:11:06.052459 1703230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:11:06.060012 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:11:06.072904 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1410772.pem && ln -fs /usr/share/ca-certificates/1410772.pem /etc/ssl/certs/1410772.pem"
	I0908 14:11:06.083325 1703230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1410772.pem
	I0908 14:11:06.087410 1703230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:40 /usr/share/ca-certificates/1410772.pem
	I0908 14:11:06.087477 1703230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1410772.pem
	I0908 14:11:06.095317 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1410772.pem /etc/ssl/certs/51391683.0"
	I0908 14:11:06.106400 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14107722.pem && ln -fs /usr/share/ca-certificates/14107722.pem /etc/ssl/certs/14107722.pem"
	I0908 14:11:06.117546 1703230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14107722.pem
	I0908 14:11:06.121593 1703230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:40 /usr/share/ca-certificates/14107722.pem
	I0908 14:11:06.121653 1703230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14107722.pem
	I0908 14:11:06.128974 1703230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14107722.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:11:06.139482 1703230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:11:06.143372 1703230 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:11:06.143443 1703230 kubeadm.go:392] StartCluster: {Name:calico-964891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-964891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:11:06.143547 1703230 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 14:11:06.143612 1703230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:11:06.181754 1703230 cri.go:89] found id: ""
	I0908 14:11:06.181835 1703230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:11:06.191684 1703230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:11:06.202057 1703230 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 14:11:06.202125 1703230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:11:06.212083 1703230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:11:06.212108 1703230 kubeadm.go:157] found existing configuration files:
	
	I0908 14:11:06.212153 1703230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:11:06.221876 1703230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:11:06.221934 1703230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:11:06.231117 1703230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:11:06.240559 1703230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:11:06.240614 1703230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:11:06.250044 1703230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:11:06.259736 1703230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:11:06.259798 1703230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:11:06.270067 1703230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:11:06.279070 1703230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:11:06.279139 1703230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 14:11:06.289774 1703230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 14:11:06.332833 1703230 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 14:11:06.332908 1703230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:11:06.350220 1703230 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 14:11:06.350321 1703230 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 14:11:06.350368 1703230 kubeadm.go:310] OS: Linux
	I0908 14:11:06.350423 1703230 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 14:11:06.350481 1703230 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 14:11:06.350542 1703230 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 14:11:06.350604 1703230 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 14:11:06.350667 1703230 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 14:11:06.350730 1703230 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 14:11:06.350787 1703230 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 14:11:06.350849 1703230 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 14:11:06.350908 1703230 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 14:11:06.411922 1703230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:11:06.412031 1703230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:11:06.412157 1703230 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 14:11:06.417962 1703230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 14:11:06.420534 1703230 out.go:252]   - Generating certificates and keys ...
	I0908 14:11:06.420656 1703230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:11:06.420764 1703230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:11:06.521112 1703230 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:11:06.566101 1703230 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:11:06.682311 1703230 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:11:06.831926 1703230 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:11:07.065149 1703230 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:11:07.065317 1703230 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-964891 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0908 14:11:07.419197 1703230 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:11:07.419378 1703230 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-964891 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0908 14:11:07.541615 1703230 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:11:07.771652 1703230 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:11:07.857951 1703230 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:11:07.858154 1703230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:11:08.017722 1703230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:11:08.219537 1703230 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 14:11:08.497177 1703230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:11:08.606883 1703230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:11:08.736068 1703230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 14:11:08.736761 1703230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:11:08.739845 1703230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:11:08.743690 1703230 out.go:252]   - Booting up control plane ...
	I0908 14:11:08.743839 1703230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:11:08.743958 1703230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:11:08.744974 1703230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:11:08.758497 1703230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:11:08.758649 1703230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 14:11:08.767615 1703230 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 14:11:08.767876 1703230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:11:08.767934 1703230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:11:08.868787 1703230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 14:11:08.869004 1703230 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 14:11:09.375330 1703230 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.220501ms
	I0908 14:11:09.375480 1703230 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 14:11:09.375588 1703230 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0908 14:11:09.375717 1703230 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 14:11:09.375828 1703230 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 14:11:13.747072 1703230 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.371995817s
	I0908 14:11:14.634809 1703230 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.259754435s
	I0908 14:11:16.376333 1703230 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.00124846s
	I0908 14:11:16.389713 1703230 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:11:16.403613 1703230 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:11:16.414406 1703230 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:11:16.414715 1703230 kubeadm.go:310] [mark-control-plane] Marking the node calico-964891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:11:16.431108 1703230 kubeadm.go:310] [bootstrap-token] Using token: lvy136.9htd51yq34o8hh41
	I0908 14:11:16.432792 1703230 out.go:252]   - Configuring RBAC rules ...
	I0908 14:11:16.432977 1703230 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:11:16.438886 1703230 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:11:16.446123 1703230 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:11:16.448880 1703230 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:11:16.452224 1703230 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:11:16.457559 1703230 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:11:16.784006 1703230 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:11:17.231763 1703230 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:11:17.783835 1703230 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:11:17.785237 1703230 kubeadm.go:310] 
	I0908 14:11:17.785394 1703230 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:11:17.785444 1703230 kubeadm.go:310] 
	I0908 14:11:17.785581 1703230 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:11:17.785598 1703230 kubeadm.go:310] 
	I0908 14:11:17.785632 1703230 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:11:17.785805 1703230 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:11:17.785911 1703230 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:11:17.785938 1703230 kubeadm.go:310] 
	I0908 14:11:17.786083 1703230 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:11:17.786103 1703230 kubeadm.go:310] 
	I0908 14:11:17.786175 1703230 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:11:17.786188 1703230 kubeadm.go:310] 
	I0908 14:11:17.786257 1703230 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:11:17.786342 1703230 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:11:17.786409 1703230 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:11:17.786419 1703230 kubeadm.go:310] 
	I0908 14:11:17.786556 1703230 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:11:17.786700 1703230 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:11:17.786718 1703230 kubeadm.go:310] 
	I0908 14:11:17.786843 1703230 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lvy136.9htd51yq34o8hh41 \
	I0908 14:11:17.786991 1703230 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:addabc01629643b4beff67cf8864f75adfdd0d01e6a95285606b52b3bdc29cb9 \
	I0908 14:11:17.787042 1703230 kubeadm.go:310] 	--control-plane 
	I0908 14:11:17.787051 1703230 kubeadm.go:310] 
	I0908 14:11:17.787181 1703230 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:11:17.787199 1703230 kubeadm.go:310] 
	I0908 14:11:17.787329 1703230 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lvy136.9htd51yq34o8hh41 \
	I0908 14:11:17.787558 1703230 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:addabc01629643b4beff67cf8864f75adfdd0d01e6a95285606b52b3bdc29cb9 
	I0908 14:11:17.791486 1703230 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 14:11:17.791800 1703230 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 14:11:17.791958 1703230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 14:11:17.791998 1703230 cni.go:84] Creating CNI manager for "calico"
	I0908 14:11:17.794116 1703230 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0908 14:11:17.796424 1703230 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 14:11:17.796455 1703230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0908 14:11:17.820155 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 14:11:19.570953 1703230 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.750753559s)
	I0908 14:11:19.571010 1703230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:11:19.571118 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:19.571177 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-964891 minikube.k8s.io/updated_at=2025_09_08T14_11_19_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=calico-964891 minikube.k8s.io/primary=true
	I0908 14:11:19.582894 1703230 ops.go:34] apiserver oom_adj: -16
	I0908 14:11:19.684024 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:20.184179 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:20.684451 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:21.184162 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:21.685151 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:22.184885 1703230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:11:22.272732 1703230 kubeadm.go:1105] duration metric: took 2.701683535s to wait for elevateKubeSystemPrivileges
	I0908 14:11:22.272789 1703230 kubeadm.go:394] duration metric: took 16.129346382s to StartCluster
	I0908 14:11:22.272816 1703230 settings.go:142] acquiring lock: {Name:mk328bcb8568502d3007e49191e73f2c52834759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:22.272907 1703230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 14:11:22.275096 1703230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/kubeconfig: {Name:mk0803a0a1720c6d1e7f7259281365272e4771e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:11:22.275404 1703230 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:11:22.275587 1703230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:11:22.275622 1703230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:11:22.275738 1703230 addons.go:69] Setting storage-provisioner=true in profile "calico-964891"
	I0908 14:11:22.275763 1703230 addons.go:238] Setting addon storage-provisioner=true in "calico-964891"
	I0908 14:11:22.275775 1703230 addons.go:69] Setting default-storageclass=true in profile "calico-964891"
	I0908 14:11:22.275802 1703230 host.go:66] Checking if "calico-964891" exists ...
	I0908 14:11:22.275822 1703230 config.go:182] Loaded profile config "calico-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:11:22.275802 1703230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-964891"
	I0908 14:11:22.276319 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:22.276491 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:22.281765 1703230 out.go:179] * Verifying Kubernetes components...
	I0908 14:11:22.284266 1703230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:11:22.306432 1703230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:11:22.306811 1703230 addons.go:238] Setting addon default-storageclass=true in "calico-964891"
	I0908 14:11:22.306861 1703230 host.go:66] Checking if "calico-964891" exists ...
	I0908 14:11:22.307216 1703230 cli_runner.go:164] Run: docker container inspect calico-964891 --format={{.State.Status}}
	I0908 14:11:22.308048 1703230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:11:22.308069 1703230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:11:22.308128 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:22.327191 1703230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:11:22.327215 1703230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:11:22.327270 1703230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-964891
	I0908 14:11:22.329355 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:22.363797 1703230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/calico-964891/id_rsa Username:docker}
	I0908 14:11:22.662303 1703230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:11:22.739690 1703230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:11:22.856448 1703230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 14:11:22.856583 1703230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:11:23.774932 1703230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.112582982s)
	I0908 14:11:23.775000 1703230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035263635s)
	I0908 14:11:23.775098 1703230 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0908 14:11:23.776478 1703230 node_ready.go:35] waiting up to 15m0s for node "calico-964891" to be "Ready" ...
	I0908 14:11:23.786497 1703230 node_ready.go:49] node "calico-964891" is "Ready"
	I0908 14:11:23.786527 1703230 node_ready.go:38] duration metric: took 10.023534ms for node "calico-964891" to be "Ready" ...
	I0908 14:11:23.786545 1703230 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:11:23.786601 1703230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:11:23.792084 1703230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 14:11:23.793837 1703230 addons.go:514] duration metric: took 1.518221883s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 14:11:23.802878 1703230 api_server.go:72] duration metric: took 1.527431312s to wait for apiserver process to appear ...
	I0908 14:11:23.802974 1703230 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:11:23.803009 1703230 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0908 14:11:23.810768 1703230 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0908 14:11:23.811732 1703230 api_server.go:141] control plane version: v1.34.0
	I0908 14:11:23.811803 1703230 api_server.go:131] duration metric: took 8.809636ms to wait for apiserver health ...
	I0908 14:11:23.811818 1703230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:11:23.831968 1703230 system_pods.go:59] 10 kube-system pods found
	I0908 14:11:23.832022 1703230 system_pods.go:61] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:23.832036 1703230 system_pods.go:61] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:23.832048 1703230 system_pods.go:61] "coredns-66bc5c9577-ks26c" [e7ad4731-7212-4651-a025-a4bac687472a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:23.832060 1703230 system_pods.go:61] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:23.832071 1703230 system_pods.go:61] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:23.832080 1703230 system_pods.go:61] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:23.832091 1703230 system_pods.go:61] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:23.832103 1703230 system_pods.go:61] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:11:23.832118 1703230 system_pods.go:61] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:11:23.832154 1703230 system_pods.go:61] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:11:23.832174 1703230 system_pods.go:74] duration metric: took 20.347469ms to wait for pod list to return data ...
	I0908 14:11:23.832189 1703230 default_sa.go:34] waiting for default service account to be created ...
	I0908 14:11:23.836067 1703230 default_sa.go:45] found service account: "default"
	I0908 14:11:23.836103 1703230 default_sa.go:55] duration metric: took 3.905454ms for default service account to be created ...
	I0908 14:11:23.836116 1703230 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 14:11:23.921650 1703230 system_pods.go:86] 10 kube-system pods found
	I0908 14:11:23.921691 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:23.921706 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:23.921716 1703230 system_pods.go:89] "coredns-66bc5c9577-ks26c" [e7ad4731-7212-4651-a025-a4bac687472a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:23.921737 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:23.921750 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:23.921759 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:23.921770 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:23.921778 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:11:23.921789 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:11:23.921799 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:11:23.921840 1703230 retry.go:31] will retry after 272.866638ms: missing components: kube-dns, kube-proxy
	I0908 14:11:24.201193 1703230 system_pods.go:86] 10 kube-system pods found
	I0908 14:11:24.201250 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:24.201264 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:24.201370 1703230 system_pods.go:89] "coredns-66bc5c9577-ks26c" [e7ad4731-7212-4651-a025-a4bac687472a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.201381 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.201389 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:24.201398 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:24.201410 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:24.201418 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 14:11:24.201426 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:11:24.201446 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:11:24.201470 1703230 retry.go:31] will retry after 275.729629ms: missing components: kube-dns, kube-proxy
	I0908 14:11:24.294937 1703230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-964891" context rescaled to 1 replicas
	I0908 14:11:24.481994 1703230 system_pods.go:86] 10 kube-system pods found
	I0908 14:11:24.482038 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:24.482051 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:24.482081 1703230 system_pods.go:89] "coredns-66bc5c9577-ks26c" [e7ad4731-7212-4651-a025-a4bac687472a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.482091 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.482099 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:24.482108 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:24.482116 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:24.482123 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:24.482130 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:11:24.482138 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:11:24.482159 1703230 retry.go:31] will retry after 395.735794ms: missing components: kube-dns
	I0908 14:11:24.882738 1703230 system_pods.go:86] 10 kube-system pods found
	I0908 14:11:24.882780 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:24.882793 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:24.882803 1703230 system_pods.go:89] "coredns-66bc5c9577-ks26c" [e7ad4731-7212-4651-a025-a4bac687472a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.882812 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:24.882821 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:24.882830 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:24.882839 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:24.882851 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:24.882857 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:24.882865 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:11:24.882887 1703230 retry.go:31] will retry after 403.671964ms: missing components: kube-dns
	I0908 14:11:25.291818 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:25.291861 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:25.291873 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:25.291883 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:25.291892 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:25.291902 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:25.291910 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:25.291920 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:25.291926 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:25.291932 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:25.291954 1703230 retry.go:31] will retry after 639.148141ms: missing components: kube-dns
	I0908 14:11:25.936492 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:25.936534 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:25.936557 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:25.936568 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:25.936578 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:25.936587 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:25.936596 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:25.936605 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:25.936611 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:25.936619 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:25.936638 1703230 retry.go:31] will retry after 903.567262ms: missing components: kube-dns
	I0908 14:11:26.845526 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:26.845560 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:26.845569 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:26.845575 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:26.845581 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:26.845590 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:26.845600 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:26.845606 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:26.845613 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:26.845620 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:26.845640 1703230 retry.go:31] will retry after 805.697842ms: missing components: kube-dns
	I0908 14:11:27.656941 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:27.656989 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:27.657003 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:27.657012 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:27.657020 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:27.657029 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:27.657039 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:11:27.657049 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:27.657055 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:27.657067 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:27.657090 1703230 retry.go:31] will retry after 1.293269503s: missing components: kube-dns
	I0908 14:11:28.955567 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:28.955609 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:28.955620 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:28.955627 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:28.955634 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:28.955641 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:11:28.955645 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:28.955654 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:28.955660 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:28.955664 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:28.955682 1703230 retry.go:31] will retry after 1.399383204s: missing components: kube-dns
	I0908 14:11:30.395050 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:30.395100 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:30.395121 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:30.395131 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:30.395145 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:11:30.395153 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:30.395158 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:30.395166 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:30.395176 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:30.395180 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:30.395201 1703230 retry.go:31] will retry after 1.667531169s: missing components: kube-dns
	I0908 14:11:32.067563 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:32.067610 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:32.067621 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:32.067628 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:32.067633 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:32.067638 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:32.067642 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:32.067646 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:32.067650 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:32.067653 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:32.067672 1703230 retry.go:31] will retry after 2.666837589s: missing components: kube-dns
	I0908 14:11:34.739913 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:34.739957 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:34.739972 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:34.739982 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:34.739988 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:34.739995 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:34.740002 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:34.740011 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:34.740016 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:34.740023 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:34.740041 1703230 retry.go:31] will retry after 2.783977243s: missing components: kube-dns
	I0908 14:11:37.530744 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:37.530791 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:37.530804 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:37.530814 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:37.530819 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:37.530827 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:37.530834 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:37.530839 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:37.530845 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:37.530850 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:37.530873 1703230 retry.go:31] will retry after 3.887648194s: missing components: kube-dns
	I0908 14:11:41.424588 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:41.424624 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:41.424636 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:41.424647 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:41.424653 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:41.424661 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:41.424668 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:41.424676 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:41.424682 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:41.424690 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:41.424710 1703230 retry.go:31] will retry after 3.428085756s: missing components: kube-dns
	I0908 14:11:44.857405 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:44.857447 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:44.857460 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:44.857469 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:44.857473 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:44.857478 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:44.857482 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:44.857486 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:44.857491 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:44.857493 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:44.857508 1703230 retry.go:31] will retry after 5.716286039s: missing components: kube-dns
	I0908 14:11:50.579870 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:50.579920 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:50.579937 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:50.579949 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:50.579954 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:50.579961 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:50.579967 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:50.579974 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:50.579980 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:50.579986 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:50.580009 1703230 retry.go:31] will retry after 7.625479822s: missing components: kube-dns
	I0908 14:11:58.214347 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:58.214386 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:58.214394 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:58.214402 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:58.214406 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:11:58.214411 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:11:58.214414 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:11:58.214417 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:11:58.214421 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:11:58.214424 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:11:58.214440 1703230 retry.go:31] will retry after 9.030146992s: missing components: kube-dns
	I0908 14:12:07.252125 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:12:07.252168 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:12:07.252183 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:12:07.252196 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:12:07.252202 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:12:07.252209 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:12:07.252215 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:12:07.252220 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:12:07.252225 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:12:07.252229 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:12:07.252255 1703230 retry.go:31] will retry after 11.593020741s: missing components: kube-dns
	I0908 14:12:18.854326 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:12:18.854371 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:12:18.854385 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:12:18.854396 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:12:18.854406 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:12:18.854418 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:12:18.854425 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:12:18.854433 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:12:18.854439 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:12:18.854447 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:12:18.854468 1703230 retry.go:31] will retry after 17.096654819s: missing components: kube-dns
	I0908 14:12:35.956566 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:12:35.956606 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:12:35.956618 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:12:35.956627 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:12:35.956633 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:12:35.956639 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:12:35.956645 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:12:35.956651 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:12:35.956657 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:12:35.956662 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:12:35.956684 1703230 retry.go:31] will retry after 18.459051885s: missing components: kube-dns
	I0908 14:12:54.421798 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:12:54.421840 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:12:54.421852 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:12:54.421863 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:12:54.421871 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:12:54.421877 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:12:54.421884 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:12:54.421892 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:12:54.421897 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:12:54.421902 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:12:54.421923 1703230 retry.go:31] will retry after 24.146416901s: missing components: kube-dns
	I0908 14:13:18.574045 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:13:18.574088 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:13:18.574098 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:13:18.574108 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:13:18.574116 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:13:18.574124 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:13:18.574131 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:13:18.574136 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:13:18.574141 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:13:18.574146 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:13:18.574165 1703230 retry.go:31] will retry after 26.663050921s: missing components: kube-dns
	I0908 14:13:45.242073 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:13:45.242115 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:13:45.242126 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:13:45.242138 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:13:45.242142 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:13:45.242145 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:13:45.242149 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:13:45.242153 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:13:45.242158 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:13:45.242161 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:13:45.242177 1703230 retry.go:31] will retry after 29.938029343s: missing components: kube-dns
	I0908 14:14:15.185159 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:14:15.185200 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:14:15.185213 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:14:15.185222 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:14:15.185226 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:14:15.185230 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:14:15.185233 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:14:15.185238 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:14:15.185242 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:14:15.185246 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:14:15.185309 1703230 retry.go:31] will retry after 33.835258006s: missing components: kube-dns
	I0908 14:14:49.025305 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:14:49.025342 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:14:49.025351 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:14:49.025358 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:14:49.025363 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:14:49.025368 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:14:49.025371 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:14:49.025375 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:14:49.025378 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:14:49.025382 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:14:49.025398 1703230 retry.go:31] will retry after 55.906387015s: missing components: kube-dns
	I0908 14:15:44.936427 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:15:44.936472 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:15:44.936486 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:15:44.936497 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:15:44.936502 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:15:44.936508 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:15:44.936514 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:15:44.936519 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:15:44.936524 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:15:44.936530 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:15:44.936554 1703230 retry.go:31] will retry after 1m5.196018608s: missing components: kube-dns
	I0908 14:16:50.138117 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:16:50.138161 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:16:50.138170 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:16:50.138177 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:16:50.138181 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:16:50.138188 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:16:50.138192 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:16:50.138196 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:16:50.138199 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:16:50.138202 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:16:50.138222 1703230 retry.go:31] will retry after 47.367625925s: missing components: kube-dns
	I0908 14:17:37.511143 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:17:37.511182 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:17:37.511191 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:17:37.511198 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:17:37.511202 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:17:37.511208 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:17:37.511211 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:17:37.511215 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:17:37.511219 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:17:37.511222 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:17:37.511240 1703230 retry.go:31] will retry after 59.983919733s: missing components: kube-dns
	I0908 14:18:37.502211 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:18:37.502248 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:18:37.502277 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:18:37.502287 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:18:37.502292 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:18:37.502296 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:18:37.502306 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:18:37.502310 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:18:37.502313 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:18:37.502316 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:18:37.502333 1703230 retry.go:31] will retry after 1m2.133164656s: missing components: kube-dns
	I0908 14:19:39.640115 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:19:39.640160 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:19:39.640172 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:19:39.640181 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:19:39.640185 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:19:39.640189 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:19:39.640192 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:19:39.640197 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:19:39.640200 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:19:39.640203 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:19:39.640222 1703230 retry.go:31] will retry after 47.932802052s: missing components: kube-dns
	I0908 14:20:27.580602 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:20:27.580638 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:20:27.580647 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:20:27.580656 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:20:27.580660 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:20:27.580664 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:20:27.580668 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:20:27.580671 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:20:27.580674 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:20:27.580677 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:20:27.580698 1703230 retry.go:31] will retry after 1m4.280594703s: missing components: kube-dns
	I0908 14:21:31.867669 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:21:31.867727 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:21:31.867746 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:21:31.867758 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:21:31.867764 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:21:31.867770 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:21:31.867775 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:21:31.867787 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:21:31.867793 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:21:31.867800 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:21:31.867829 1703230 retry.go:31] will retry after 45.36659445s: missing components: kube-dns
	I0908 14:22:17.239788 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:22:17.239826 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:22:17.239842 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:22:17.239856 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:22:17.239861 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:22:17.239866 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:22:17.239870 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:22:17.239874 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:22:17.239878 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:22:17.239881 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:22:17.239902 1703230 retry.go:31] will retry after 45.89956064s: missing components: kube-dns
	I0908 14:23:03.143750 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:23:03.143787 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:23:03.143796 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:23:03.143852 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:23:03.143859 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:23:03.143864 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:23:03.143870 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:23:03.143874 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:23:03.143881 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:23:03.143884 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:23:03.143903 1703230 retry.go:31] will retry after 56.773563445s: missing components: kube-dns
	I0908 14:23:59.923200 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:23:59.923259 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:23:59.923272 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:23:59.923280 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:23:59.923284 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:23:59.923288 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:23:59.923293 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:23:59.923298 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:23:59.923301 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:23:59.923304 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:23:59.923327 1703230 retry.go:31] will retry after 1m7.003185413s: missing components: kube-dns
	I0908 14:25:06.933943 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:25:06.933991 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:25:06.934006 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:25:06.934017 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:25:06.934023 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:25:06.934029 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:25:06.934051 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:25:06.934055 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:25:06.934064 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:25:06.934070 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:25:06.934115 1703230 retry.go:31] will retry after 54.603018548s: missing components: kube-dns
	I0908 14:26:01.541855 1703230 system_pods.go:86] 9 kube-system pods found
	I0908 14:26:01.541896 1703230 system_pods.go:89] "calico-kube-controllers-59556d9b4c-ttc8s" [1ce9815a-ffe9-4237-968e-0b7c7bb65454] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:26:01.541908 1703230 system_pods.go:89] "calico-node-wnwnb" [fce7f98c-a9df-4274-911f-5d0e5cffd2e6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:26:01.541917 1703230 system_pods.go:89] "coredns-66bc5c9577-pxkw6" [beeab12d-c967-4047-82ba-f3808259bc56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:26:01.541921 1703230 system_pods.go:89] "etcd-calico-964891" [63663b87-62fe-43dd-aba1-35a858a486e3] Running
	I0908 14:26:01.541925 1703230 system_pods.go:89] "kube-apiserver-calico-964891" [a9aef1b9-24f9-4e74-99dc-5723cbbcb483] Running
	I0908 14:26:01.541928 1703230 system_pods.go:89] "kube-controller-manager-calico-964891" [1893e045-e50b-42b9-bf27-d003278c1800] Running
	I0908 14:26:01.541933 1703230 system_pods.go:89] "kube-proxy-s768d" [7ab62492-1c81-4fbd-bcd7-a3c2b3c1e217] Running
	I0908 14:26:01.541939 1703230 system_pods.go:89] "kube-scheduler-calico-964891" [7f861dbd-a6cb-43fc-9d69-2229fc0746f3] Running
	I0908 14:26:01.541943 1703230 system_pods.go:89] "storage-provisioner" [1bc11394-ee5d-43de-8ceb-68852ecac3ea] Running
	I0908 14:26:01.544721 1703230 out.go:203] 
	W0908 14:26:01.546396 1703230 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0908 14:26:01.546424 1703230 out.go:285] * 
	* 
	W0908 14:26:01.548118 1703230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0908 14:26:01.549896 1703230 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (907.56s)
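The wait loop above shows the same state on every retry: calico-node-wnwnb never gets past its init containers (upgrade-ipam, install-cni, mount-bpffs), so the CNI configuration is presumably never installed, coredns-66bc5c9577-pxkw6 stays Pending, and the apps_running waiter keeps reporting kube-dns as the missing component until the 15m timeout expires. A minimal diagnostic sketch, assuming the calico-964891 cluster were still reachable (in CI the profile is normally deleted after the failure) and using the pod and init-container names from the log above:

	# minikube normally names the kubeconfig context after the profile
	kubectl config use-context calico-964891

	# confirm which pods are stuck and where they are scheduled
	kubectl -n kube-system get pods -o wide

	# pod events usually show why an init container cannot start
	kubectl -n kube-system describe pod calico-node-wnwnb

	# logs from the init containers that never completed
	kubectl -n kube-system logs calico-node-wnwnb -c upgrade-ipam
	kubectl -n kube-system logs calico-node-wnwnb -c install-cni

	# full minikube log bundle, as suggested in the error box above
	out/minikube-linux-amd64 -p calico-964891 logs --file=logs.txt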

                                                
                                    

Test pass (300/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.16
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 4.67
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.23
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 1.23
21 TestBinaryMirror 0.87
22 TestOffline 70.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 155.76
29 TestAddons/serial/Volcano 71.76
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.47
35 TestAddons/parallel/Registry 15.31
36 TestAddons/parallel/RegistryCreds 0.66
37 TestAddons/parallel/Ingress 19.35
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 5.75
41 TestAddons/parallel/CSI 63.32
42 TestAddons/parallel/Headlamp 32.56
43 TestAddons/parallel/CloudSpanner 5.51
44 TestAddons/parallel/LocalPath 53.77
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 11.77
47 TestAddons/parallel/AmdGpuDevicePlugin 6.49
48 TestAddons/StoppedEnableDisable 12.24
49 TestCertOptions 31.05
50 TestCertExpiration 213.42
52 TestForceSystemdFlag 33.45
53 TestForceSystemdEnv 31.8
54 TestDockerEnvContainerd 42.83
55 TestKVMDriverInstallOrUpdate 1.94
59 TestErrorSpam/setup 24.42
60 TestErrorSpam/start 0.67
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.57
63 TestErrorSpam/unpause 1.83
64 TestErrorSpam/stop 2.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 45.77
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 5.91
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
76 TestFunctional/serial/CacheCmd/cache/add_local 1.88
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 40.26
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.44
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.43
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 9.46
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 20.71
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 37.47
102 TestFunctional/parallel/SSHCmd 0.5
103 TestFunctional/parallel/CpCmd 1.84
104 TestFunctional/parallel/MySQL 23.09
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 1.58
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
114 TestFunctional/parallel/License 0.56
115 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
117 TestFunctional/parallel/ProfileCmd/profile_list 0.48
118 TestFunctional/parallel/MountCmd/any-port 7.78
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.55
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
126 TestFunctional/parallel/ImageCommands/ImageBuild 6.4
127 TestFunctional/parallel/ImageCommands/Setup 1.51
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
135 TestFunctional/parallel/ServiceCmd/List 0.57
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
138 TestFunctional/parallel/MountCmd/specific-port 1.95
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
141 TestFunctional/parallel/ServiceCmd/Format 0.44
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
143 TestFunctional/parallel/ServiceCmd/URL 0.46
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.21
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 101.73
164 TestMultiControlPlane/serial/DeployApp 5.34
165 TestMultiControlPlane/serial/PingHostFromPods 1.13
166 TestMultiControlPlane/serial/AddWorkerNode 12.32
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
169 TestMultiControlPlane/serial/CopyFile 16.39
170 TestMultiControlPlane/serial/StopSecondaryNode 12.65
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 10.62
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.88
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.34
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 35.95
178 TestMultiControlPlane/serial/RestartCluster 57.51
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 26.16
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
185 TestJSONOutput/start/Command 48.35
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.68
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.76
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 31.35
211 TestKicCustomNetwork/use_default_bridge_network 26.02
212 TestKicExistingNetwork 25.45
213 TestKicCustomSubnet 26.78
214 TestKicStaticIP 26.79
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 55.93
219 TestMountStart/serial/StartWithMountFirst 5.47
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 5.3
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 6.95
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 54.3
231 TestMultiNode/serial/DeployApp2Nodes 17.91
232 TestMultiNode/serial/PingHostFrom2Pods 0.77
233 TestMultiNode/serial/AddNode 14.95
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.25
237 TestMultiNode/serial/StopNode 2.14
238 TestMultiNode/serial/StartAfterStop 6.96
239 TestMultiNode/serial/RestartKeepsNodes 80.73
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 23.87
242 TestMultiNode/serial/RestartMultiNode 53.01
243 TestMultiNode/serial/ValidateNameConflict 25.45
248 TestPreload 141.72
250 TestScheduledStopUnix 100.79
253 TestInsufficientStorage 10
254 TestRunningBinaryUpgrade 79.43
256 TestKubernetesUpgrade 158.99
257 TestMissingContainerUpgrade 95.12
259 TestStoppedBinaryUpgrade/Setup 0.68
260 TestPause/serial/Start 66.23
261 TestStoppedBinaryUpgrade/Upgrade 77.98
262 TestPause/serial/SecondStartNoReconfiguration 6.13
270 TestPause/serial/Pause 1.01
271 TestPause/serial/VerifyStatus 0.42
272 TestPause/serial/Unpause 0.96
273 TestPause/serial/PauseAgain 1.05
274 TestPause/serial/DeletePaused 3.4
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.62
276 TestPause/serial/VerifyDeletedResources 0.74
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
279 TestNoKubernetes/serial/StartWithK8s 28.65
283 TestNoKubernetes/serial/StartWithStopK8s 25.81
288 TestNetworkPlugins/group/false 5.92
292 TestNoKubernetes/serial/Start 4.61
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
294 TestNoKubernetes/serial/ProfileList 33.06
295 TestNoKubernetes/serial/Stop 1.21
296 TestNoKubernetes/serial/StartNoArgs 6.17
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
299 TestStartStop/group/old-k8s-version/serial/FirstStart 67.17
301 TestStartStop/group/no-preload/serial/FirstStart 68.58
303 TestStartStop/group/embed-certs/serial/FirstStart 54.74
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.28
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
306 TestStartStop/group/old-k8s-version/serial/Stop 11.95
307 TestStartStop/group/no-preload/serial/DeployApp 8.27
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/old-k8s-version/serial/SecondStart 55.45
310 TestStartStop/group/embed-certs/serial/DeployApp 8.26
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
312 TestStartStop/group/no-preload/serial/Stop 12.07
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
314 TestStartStop/group/embed-certs/serial/Stop 12.08
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 50.99
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
318 TestStartStop/group/embed-certs/serial/SecondStart 51.66
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.49
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/old-k8s-version/serial/Pause 2.91
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/newest-cni/serial/FirstStart 29.79
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
331 TestStartStop/group/no-preload/serial/Pause 3.14
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
333 TestNetworkPlugins/group/auto/Start 48.59
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.48
336 TestNetworkPlugins/group/kindnet/Start 54.27
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
340 TestStartStop/group/newest-cni/serial/Stop 1.22
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
342 TestStartStop/group/newest-cni/serial/SecondStart 15.87
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
348 TestStartStop/group/newest-cni/serial/Pause 3.26
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.36
352 TestNetworkPlugins/group/auto/KubeletFlags 0.36
353 TestNetworkPlugins/group/auto/NetCatPod 8.24
354 TestNetworkPlugins/group/auto/DNS 0.22
355 TestNetworkPlugins/group/auto/Localhost 0.14
356 TestNetworkPlugins/group/auto/HairPin 0.12
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
359 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
360 TestNetworkPlugins/group/custom-flannel/Start 52.95
361 TestNetworkPlugins/group/kindnet/DNS 0.15
362 TestNetworkPlugins/group/kindnet/Localhost 0.13
363 TestNetworkPlugins/group/kindnet/HairPin 0.13
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.42
368 TestNetworkPlugins/group/enable-default-cni/Start 38.09
369 TestNetworkPlugins/group/flannel/Start 122.23
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
374 TestNetworkPlugins/group/custom-flannel/DNS 0.13
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
380 TestNetworkPlugins/group/bridge/Start 40.16
381 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
382 TestNetworkPlugins/group/bridge/NetCatPod 9.19
383 TestNetworkPlugins/group/bridge/DNS 0.12
384 TestNetworkPlugins/group/bridge/Localhost 0.1
385 TestNetworkPlugins/group/bridge/HairPin 0.12
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
388 TestNetworkPlugins/group/flannel/NetCatPod 9.18
389 TestNetworkPlugins/group/flannel/DNS 0.12
390 TestNetworkPlugins/group/flannel/Localhost 0.1
391 TestNetworkPlugins/group/flannel/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-359566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-359566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.162205707s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:33:54.530982 1410772 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0908 13:33:54.531115 1410772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-359566
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-359566: exit status 85 (72.956217ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-359566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-359566 │ jenkins │ v1.36.0 │ 08 Sep 25 13:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:33:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:33:49.414951 1410784 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:33:49.415264 1410784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:33:49.415276 1410784 out.go:374] Setting ErrFile to fd 2...
	I0908 13:33:49.415280 1410784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:33:49.415518 1410784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	W0908 13:33:49.415651 1410784 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-1407098/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-1407098/.minikube/config/config.json: no such file or directory
	I0908 13:33:49.416325 1410784 out.go:368] Setting JSON to true
	I0908 13:33:49.417555 1410784 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11773,"bootTime":1757326656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:33:49.417679 1410784 start.go:140] virtualization: kvm guest
	I0908 13:33:49.420040 1410784 out.go:99] [download-only-359566] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 13:33:49.420224 1410784 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:33:49.420298 1410784 notify.go:220] Checking for updates...
	I0908 13:33:49.421856 1410784 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:33:49.423780 1410784 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:33:49.425660 1410784 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 13:33:49.427214 1410784 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 13:33:49.428619 1410784 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:33:49.431568 1410784 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:33:49.431948 1410784 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:33:49.456991 1410784 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:33:49.457155 1410784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:33:49.511577 1410784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 13:33:49.500973504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:33:49.511692 1410784 docker.go:318] overlay module found
	I0908 13:33:49.513792 1410784 out.go:99] Using the docker driver based on user configuration
	I0908 13:33:49.513838 1410784 start.go:304] selected driver: docker
	I0908 13:33:49.513845 1410784 start.go:918] validating driver "docker" against <nil>
	I0908 13:33:49.513936 1410784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:33:49.567448 1410784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 13:33:49.557623693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:33:49.567641 1410784 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:33:49.568177 1410784 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 13:33:49.568378 1410784 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:33:49.570409 1410784 out.go:171] Using Docker driver with root privileges
	I0908 13:33:49.571831 1410784 cni.go:84] Creating CNI manager for ""
	I0908 13:33:49.571898 1410784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:33:49.571914 1410784 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:33:49.572053 1410784 start.go:348] cluster config:
	{Name:download-only-359566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-359566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:33:49.573714 1410784 out.go:99] Starting "download-only-359566" primary control-plane node in "download-only-359566" cluster
	I0908 13:33:49.573746 1410784 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:33:49.575148 1410784 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:33:49.575180 1410784 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:33:49.575298 1410784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:33:49.592419 1410784 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:33:49.592631 1410784 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:33:49.592728 1410784 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:33:49.593584 1410784 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0908 13:33:49.593613 1410784 cache.go:58] Caching tarball of preloaded images
	I0908 13:33:49.593742 1410784 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:33:49.595730 1410784 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:33:49.595755 1410784 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0908 13:33:49.619769 1410784 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0908 13:33:52.567892 1410784 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0908 13:33:52.567979 1410784 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0908 13:33:53.682000 1410784 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I0908 13:33:53.682414 1410784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/download-only-359566/config.json ...
	I0908 13:33:53.682457 1410784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/download-only-359566/config.json: {Name:mkdc738409988db733669930625fa9e454d2f8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:33:53.682651 1410784 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:33:53.682861 1410784 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-359566 host does not exist
	  To start a cluster, run: "minikube start -p download-only-359566"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-359566
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.671978873s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.67s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:33:59.659670 1410772 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 13:33:59.659745 1410772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-1407098/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196872
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196872: exit status 85 (73.469345ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-359566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-359566 │ jenkins │ v1.36.0 │ 08 Sep 25 13:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:33 UTC │ 08 Sep 25 13:33 UTC │
	│ delete  │ -p download-only-359566                                                                                                                                                               │ download-only-359566 │ jenkins │ v1.36.0 │ 08 Sep 25 13:33 UTC │ 08 Sep 25 13:33 UTC │
	│ start   │ -o=json --download-only -p download-only-196872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-196872 │ jenkins │ v1.36.0 │ 08 Sep 25 13:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:33:55
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:33:55.034306 1411132 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:33:55.034588 1411132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:33:55.034599 1411132 out.go:374] Setting ErrFile to fd 2...
	I0908 13:33:55.034604 1411132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:33:55.034872 1411132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:33:55.035516 1411132 out.go:368] Setting JSON to true
	I0908 13:33:55.036584 1411132 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11779,"bootTime":1757326656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:33:55.036699 1411132 start.go:140] virtualization: kvm guest
	I0908 13:33:55.038736 1411132 out.go:99] [download-only-196872] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:33:55.038920 1411132 notify.go:220] Checking for updates...
	I0908 13:33:55.040687 1411132 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:33:55.042477 1411132 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:33:55.044238 1411132 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 13:33:55.045614 1411132 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 13:33:55.047191 1411132 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:33:55.049995 1411132 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:33:55.050281 1411132 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:33:55.072258 1411132 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:33:55.072347 1411132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:33:55.125036 1411132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:51 SystemTime:2025-09-08 13:33:55.115125756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:33:55.125196 1411132 docker.go:318] overlay module found
	I0908 13:33:55.127247 1411132 out.go:99] Using the docker driver based on user configuration
	I0908 13:33:55.127294 1411132 start.go:304] selected driver: docker
	I0908 13:33:55.127307 1411132 start.go:918] validating driver "docker" against <nil>
	I0908 13:33:55.127478 1411132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:33:55.185382 1411132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:51 SystemTime:2025-09-08 13:33:55.175946591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:33:55.185586 1411132 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:33:55.186083 1411132 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 13:33:55.186214 1411132 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:33:55.188299 1411132 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-196872 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196872"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-196872
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (1.23s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-675530 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-675530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-675530
--- PASS: TestDownloadOnlyKic (1.23s)

                                                
                                    
TestBinaryMirror (0.87s)

                                                
                                                
=== RUN   TestBinaryMirror
I0908 13:34:01.641144 1410772 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-516718 --alsologtostderr --binary-mirror http://127.0.0.1:44473 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-516718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-516718
--- PASS: TestBinaryMirror (0.87s)

TestOffline (70.89s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-958177 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-958177 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m8.377821162s)
helpers_test.go:175: Cleaning up "offline-containerd-958177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-958177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-958177: (2.509759233s)
--- PASS: TestOffline (70.89s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-569758
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-569758: exit status 85 (65.993005ms)

-- stdout --
	* Profile "addons-569758" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-569758"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-569758
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-569758: exit status 85 (65.370346ms)

-- stdout --
	* Profile "addons-569758" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-569758"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (155.76s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-569758 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-569758 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.764145346s)
--- PASS: TestAddons/Setup (155.76s)

TestAddons/serial/Volcano (71.76s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 12.009099ms
addons_test.go:876: volcano-admission stabilized in 12.081803ms
addons_test.go:868: volcano-scheduler stabilized in 12.136423ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-s5g8r" [8a5f7cf8-e6f3-42a0-a311-0617de8083bb] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003745346s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-2w8ct" [53a4e6bf-0012-41a6-9bb6-d62b5d161fb3] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:352: "volcano-admission-589c7dd587-2w8ct" [53a4e6bf-0012-41a6-9bb6-d62b5d161fb3] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 37.004433799s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-b5crj" [61aa6f1d-40a2-46cd-b5e1-c2df6e96ccc3] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004002259s
addons_test.go:903: (dbg) Run:  kubectl --context addons-569758 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-569758 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-569758 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [75fd1991-db80-44ea-b693-46e7d8daa510] Pending
helpers_test.go:352: "test-job-nginx-0" [75fd1991-db80-44ea-b693-46e7d8daa510] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [75fd1991-db80-44ea-b693-46e7d8daa510] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004253961s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable volcano --alsologtostderr -v=1: (11.425328943s)
--- PASS: TestAddons/serial/Volcano (71.76s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-569758 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-569758 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.47s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-569758 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-569758 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [26de329c-3625-478e-8fcf-8dd5e62d60e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [26de329c-3625-478e-8fcf-8dd5e62d60e0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00468488s
addons_test.go:694: (dbg) Run:  kubectl --context addons-569758 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-569758 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-569758 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

TestAddons/parallel/Registry (15.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.486996ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-v6nvv" [e260d54e-0127-48cc-8199-9f38fa41a694] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00334462s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-qx9s9" [e7d03e77-ab35-4225-8849-95edf136dd6a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003624835s
addons_test.go:392: (dbg) Run:  kubectl --context addons-569758 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-569758 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-569758 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.241509197s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 ip
2025/09/08 13:38:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.31s)
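For reference, the registry probes recorded above can be repeated by hand. A minimal sketch, assuming the same addons-569758 profile and busybox image the test uses; the pod name registry-check is illustrative, and port 5000 on the node IP is the endpoint the log shows being queried:

  # Probe the in-cluster registry service from a throwaway pod, then hit the node IP on port 5000.
  kubectl --context addons-569758 run --rm -it registry-check --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -sI "http://$(out/minikube-linux-amd64 -p addons-569758 ip):5000/"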

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.635422ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-569758
addons_test.go:332: (dbg) Run:  kubectl --context addons-569758 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/Ingress (19.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-569758 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-569758 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-569758 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1fdb7636-1816-441b-a751-0cb266541b5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1fdb7636-1816-441b-a751-0cb266541b5d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.071265063s
I0908 13:38:22.935766 1410772 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-569758 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable ingress --alsologtostderr -v=1: (7.826713267s)
--- PASS: TestAddons/parallel/Ingress (19.35s)

TestAddons/parallel/InspektorGadget (6.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-727h9" [b7a942e2-3155-4569-80f3-ccd5fa0f37b6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004628672s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.29s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.86811ms
I0908 13:38:07.658980 1410772 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:38:07.659007 1410772 kapi.go:107] duration metric: took 4.235215ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-r4vvx" [8dbb597a-ebe0-4025-8878-a340c1d58de7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003264748s
addons_test.go:463: (dbg) Run:  kubectl --context addons-569758 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/CSI (63.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.249395ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [16f813d7-8215-42f3-93c8-ee54a0a0d8f8] Pending
helpers_test.go:352: "task-pv-pod" [16f813d7-8215-42f3-93c8-ee54a0a0d8f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [16f813d7-8215-42f3-93c8-ee54a0a0d8f8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004100806s
addons_test.go:572: (dbg) Run:  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-569758 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-569758 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-569758 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-569758 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4b148e7d-e1d5-4128-9298-3f651e7f8aa3] Pending
helpers_test.go:352: "task-pv-pod-restore" [4b148e7d-e1d5-4128-9298-3f651e7f8aa3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4b148e7d-e1d5-4128-9298-3f651e7f8aa3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.004466499s
addons_test.go:614: (dbg) Run:  kubectl --context addons-569758 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-569758 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-569758 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.608359923s)
--- PASS: TestAddons/parallel/CSI (63.32s)
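The CSI test above exercises a create → snapshot → restore cycle. A minimal manual sketch of the same flow, assuming the same profile and the testdata manifests named in the log; the kubectl wait --for=jsonpath form needs a reasonably recent kubectl:

  # Create a PVC and a writer pod, snapshot the claim, then restore a new claim and pod from the snapshot.
  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-569758 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true
  kubectl --context addons-569758 delete pod task-pv-pod && kubectl --context addons-569758 delete pvc hpvc
  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-569758 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml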

TestAddons/parallel/Headlamp (32.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-569758 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-c4x8l" [8634919c-aaf9-4b0d-8b9b-61367b6dae1e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-c4x8l" [8634919c-aaf9-4b0d-8b9b-61367b6dae1e] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 26.004483654s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable headlamp --alsologtostderr -v=1: (5.765598379s)
--- PASS: TestAddons/parallel/Headlamp (32.56s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-nr44v" [05453d57-15a0-4774-8c9e-4434e8b5f0d4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004496724s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (53.77s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-569758 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-569758 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c360e669-8e2a-4089-b5e3-a7cb36976f70] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c360e669-8e2a-4089-b5e3-a7cb36976f70] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c360e669-8e2a-4089-b5e3-a7cb36976f70] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003596065s
addons_test.go:967: (dbg) Run:  kubectl --context addons-569758 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 ssh "cat /opt/local-path-provisioner/pvc-6e1d09c3-f057-40e6-997d-3fc97ac8937e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-569758 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-569758 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.904184271s)
--- PASS: TestAddons/parallel/LocalPath (53.77s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I0908 13:38:07.654817 1410772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-n5ckt" [cef9510f-a641-4da7-bc6a-625dbb732766] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004080913s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9p4gb" [9d4760b8-6e47-49cf-8cf0-7282b705a56e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004567197s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-569758 addons disable yakd --alsologtostderr -v=1: (5.764257589s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-swhrf" [1af6d53f-8e46-47c4-becc-9d08f3d864c7] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003462121s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-569758 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

TestAddons/StoppedEnableDisable (12.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-569758
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-569758: (11.969645322s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-569758
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-569758
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-569758
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

TestCertOptions (31.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-945269 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-945269 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.860337127s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-945269 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-945269 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-945269 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-945269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-945269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-945269: (2.500157026s)
--- PASS: TestCertOptions (31.05s)

TestCertExpiration (213.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884835 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884835 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.957174861s)
E0908 14:06:38.341785 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884835 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884835 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.022107301s)
helpers_test.go:175: Cleaning up "cert-expiration-884835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-884835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-884835: (2.441743091s)
--- PASS: TestCertExpiration (213.42s)

TestForceSystemdFlag (33.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-938734 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-938734 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.182496404s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-938734 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-938734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-938734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-938734: (1.993429629s)
--- PASS: TestForceSystemdFlag (33.45s)

TestForceSystemdEnv (31.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-139840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-139840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.544446022s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-139840 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-139840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-139840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-139840: (1.961625486s)
--- PASS: TestForceSystemdEnv (31.80s)

TestDockerEnvContainerd (42.83s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-108744 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-108744 --driver=docker  --container-runtime=containerd: (26.448567294s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-108744"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PlYqb9gpqL3z/agent.1437108" SSH_AGENT_PID="1437109" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PlYqb9gpqL3z/agent.1437108" SSH_AGENT_PID="1437109" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PlYqb9gpqL3z/agent.1437108" SSH_AGENT_PID="1437109" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.032864857s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PlYqb9gpqL3z/agent.1437108" SSH_AGENT_PID="1437109" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-108744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-108744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-108744: (2.296989403s)
--- PASS: TestDockerEnvContainerd (42.83s)
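The docker-env test above drives the node's Docker endpoint over SSH. A rough equivalent for interactive use, assuming the same dockerenv-108744 profile; docker-env emits the SSH agent and DOCKER_HOST settings that the test passes explicitly in the commands recorded above:

  # Point the local docker CLI at dockerd inside the minikube node, then build and list images there.
  eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-108744)"
  docker version
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls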

TestKVMDriverInstallOrUpdate (1.94s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0908 14:06:04.141796 1410772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:06:04.141933 1410772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 14:06:04.189148 1410772 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 14:06:04.189407 1410772 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:06:04.189463 1410772 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate529275893/001/docker-machine-driver-kvm2
I0908 14:06:04.492921 1410772 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate529275893/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000646400 gz:0xc000646408 tar:0xc000646390 tar.bz2:0xc0006463a0 tar.gz:0xc0006463b0 tar.xz:0xc0006463d0 tar.zst:0xc0006463e0 tbz2:0xc0006463a0 tgz:0xc0006463b0 txz:0xc0006463d0 tzst:0xc0006463e0 xz:0xc000646410 zip:0xc000646420 zst:0xc000646418] Getters:map[file:0xc0016dad90 http:0xc001306cd0 https:0xc001306d20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0908 14:06:04.492973 1410772 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate529275893/001/docker-machine-driver-kvm2
I0908 14:06:05.427242 1410772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:06:05.427340 1410772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 14:06:05.464624 1410772 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 14:06:05.464668 1410772 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 14:06:05.464811 1410772 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:06:05.464869 1410772 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate529275893/002/docker-machine-driver-kvm2
I0908 14:06:05.522746 1410772 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate529275893/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000646400 gz:0xc000646408 tar:0xc000646390 tar.bz2:0xc0006463a0 tar.gz:0xc0006463b0 tar.xz:0xc0006463d0 tar.zst:0xc0006463e0 tbz2:0xc0006463a0 tgz:0xc0006463b0 txz:0xc0006463d0 tzst:0xc0006463e0 xz:0xc000646410 zip:0xc000646420 zst:0xc000646418] Getters:map[file:0xc0025d7b00 http:0xc00070aff0 https:0xc00070b450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0908 14:06:05.522792 1410772 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate529275893/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.94s)
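The two "bad response code: 404 ... trying to get the common version" entries above record the driver download falling back from the arch-suffixed release asset to the plain one. A rough shell illustration of that fallback, using the URLs from the log (the base variable is just shorthand):

  # Try the amd64-specific kvm2 driver asset first, then fall back to the un-suffixed asset.
  base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
  curl -fLO "$base/docker-machine-driver-kvm2-amd64" \
    || curl -fLO "$base/docker-machine-driver-kvm2"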

TestErrorSpam/setup (24.42s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-132752 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-132752 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-132752 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-132752 --driver=docker  --container-runtime=containerd: (24.424688859s)
--- PASS: TestErrorSpam/setup (24.42s)

TestErrorSpam/start (0.67s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.92s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.83s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (2.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 stop: (2.308483849s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-132752 --log_dir /tmp/nospam-132752 stop
--- PASS: TestErrorSpam/stop (2.51s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-1407098/.minikube/files/etc/test/nested/copy/1410772/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.77s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-780336 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.767432115s)
--- PASS: TestFunctional/serial/StartWithProxy (45.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.91s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 13:41:37.584313 1410772 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --alsologtostderr -v=8
E0908 13:41:38.341134 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.347588 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.359044 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.380513 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.422015 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.503893 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.665731 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:38.987799 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:39.629594 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:40.911087 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:43.473269 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-780336 --alsologtostderr -v=8: (5.913337408s)
functional_test.go:678: soft start took 5.914073232s for "functional-780336" cluster.
I0908 13:41:43.498057 1410772 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (5.91s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-780336 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 cache add registry.k8s.io/pause:3.3: (1.019829673s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 cache add registry.k8s.io/pause:latest: (1.067412312s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-780336 /tmp/TestFunctionalserialCacheCmdcacheadd_local3042295632/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache add minikube-local-cache-test:functional-780336
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 cache add minikube-local-cache-test:functional-780336: (1.494073294s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache delete minikube-local-cache-test:functional-780336
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-780336
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
E0908 13:41:48.594888 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.878476ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
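The cache_reload pass above is effectively a three-step round-trip: drop the image from the node, confirm it is gone, then restore it from minikube's local cache. A minimal sketch of reproducing that by hand, assuming the same functional-780336 profile from this run (the trailing echo is only illustrative and not part of the test):

	out/minikube-linux-amd64 -p functional-780336 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	# expected to fail while the image is absent
	out/minikube-linux-amd64 -p functional-780336 ssh "sudo crictl inspecti registry.k8s.io/pause:latest" || echo "image absent, as expected"
	# repopulate the node from the on-disk cache
	out/minikube-linux-amd64 -p functional-780336 cache reload
	# should now succeed again
	out/minikube-linux-amd64 -p functional-780336 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"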

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 kubectl -- --context functional-780336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-780336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 13:41:58.837191 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:19.319201 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-780336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.26021046s)
functional_test.go:776: restart took 40.260366769s for "functional-780336" cluster.
I0908 13:42:31.226467 1410772 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-780336 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 logs: (1.443120128s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 logs --file /tmp/TestFunctionalserialLogsFileCmd2391178952/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 logs --file /tmp/TestFunctionalserialLogsFileCmd2391178952/001/logs.txt: (1.475579473s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-780336 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-780336
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-780336: exit status 115 (345.424758ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31882 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-780336 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
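What InvalidService checks is that "minikube service" refuses to hand out a working URL for a service with no running endpoints. A hedged sketch of the same sequence using the manifest the test applies (the exit-code echo is added here for illustration only):

	kubectl --context functional-780336 apply -f testdata/invalidsvc.yaml
	# expected: exit status 115 with an SVC_UNREACHABLE message, since no pod backs the service
	out/minikube-linux-amd64 service invalid-svc -p functional-780336; echo "exit: $?"
	kubectl --context functional-780336 delete -f testdata/invalidsvc.yaml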

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 config get cpus: exit status 14 (107.030494ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 config get cpus: exit status 14 (80.583183ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-780336 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-780336 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1455901: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.46s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-780336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (167.062819ms)

                                                
                                                
-- stdout --
	* [functional-780336] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:42:41.184034 1454438 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:42:41.184381 1454438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:41.184395 1454438 out.go:374] Setting ErrFile to fd 2...
	I0908 13:42:41.184402 1454438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:41.184754 1454438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:42:41.185414 1454438 out.go:368] Setting JSON to false
	I0908 13:42:41.186718 1454438 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12305,"bootTime":1757326656,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:42:41.186790 1454438 start.go:140] virtualization: kvm guest
	I0908 13:42:41.188947 1454438 out.go:179] * [functional-780336] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:42:41.190380 1454438 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:42:41.190383 1454438 notify.go:220] Checking for updates...
	I0908 13:42:41.191634 1454438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:42:41.192982 1454438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 13:42:41.194283 1454438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 13:42:41.195600 1454438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:42:41.196918 1454438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:42:41.198863 1454438 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:42:41.199389 1454438 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:42:41.225712 1454438 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:42:41.225848 1454438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:42:41.281801 1454438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 13:42:41.271749283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:42:41.281908 1454438 docker.go:318] overlay module found
	I0908 13:42:41.283504 1454438 out.go:179] * Using the docker driver based on existing profile
	I0908 13:42:41.284850 1454438 start.go:304] selected driver: docker
	I0908 13:42:41.284876 1454438 start.go:918] validating driver "docker" against &{Name:functional-780336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-780336 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:42:41.285002 1454438 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:42:41.287403 1454438 out.go:203] 
	W0908 13:42:41.288689 1454438 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 13:42:41.290085 1454438 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
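DryRun passes when the same profile fails fast on an impossible memory request but validates with its normal settings, all without touching the running cluster. A rough equivalent of the two invocations above, assuming the existing functional-780336 profile:

	# should exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY): 250MB is below the usable minimum of 1800MB
	out/minikube-linux-amd64 start -p functional-780336 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
	# the dry run with the profile's existing memory setting should validate cleanly
	out/minikube-linux-amd64 start -p functional-780336 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd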

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-780336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (161.460571ms)

                                                
                                                
-- stdout --
	* [functional-780336] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:42:41.597648 1454705 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:42:41.597759 1454705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:41.597768 1454705 out.go:374] Setting ErrFile to fd 2...
	I0908 13:42:41.597772 1454705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:41.598098 1454705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:42:41.598716 1454705 out.go:368] Setting JSON to false
	I0908 13:42:41.599822 1454705 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12306,"bootTime":1757326656,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:42:41.599928 1454705 start.go:140] virtualization: kvm guest
	I0908 13:42:41.601723 1454705 out.go:179] * [functional-780336] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 13:42:41.603324 1454705 notify.go:220] Checking for updates...
	I0908 13:42:41.603329 1454705 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:42:41.605000 1454705 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:42:41.606514 1454705 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 13:42:41.608096 1454705 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 13:42:41.609680 1454705 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:42:41.610918 1454705 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:42:41.612639 1454705 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:42:41.613156 1454705 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:42:41.637630 1454705 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:42:41.637814 1454705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:42:41.694859 1454705 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:59 SystemTime:2025-09-08 13:42:41.684290954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:42:41.694973 1454705 docker.go:318] overlay module found
	I0908 13:42:41.697077 1454705 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 13:42:41.698482 1454705 start.go:304] selected driver: docker
	I0908 13:42:41.698509 1454705 start.go:918] validating driver "docker" against &{Name:functional-780336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-780336 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:42:41.698649 1454705 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:42:41.701258 1454705 out.go:203] 
	W0908 13:42:41.702712 1454705 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 13:42:41.703960 1454705 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-780336 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-780336 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-r7msr" [d7ad513d-c453-4946-aba1-996fd08ca257] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-r7msr" [d7ad513d-c453-4946-aba1-996fd08ca257] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.004647097s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30245
functional_test.go:1680: http://192.168.49.2:30245: success! body:
Request served by hello-node-connect-7d85dfc575-r7msr

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30245
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.71s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [752a5764-628e-4792-880c-b6165df892ef] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003645022s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-780336 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-780336 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-780336 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-780336 apply -f testdata/storage-provisioner/pod.yaml
I0908 13:42:56.193881 1410772 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cc774629-1bca-4181-b308-8efeef0f0ea6] Pending
helpers_test.go:352: "sp-pod" [cc774629-1bca-4181-b308-8efeef0f0ea6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0908 13:43:00.281257 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [cc774629-1bca-4181-b308-8efeef0f0ea6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004675622s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-780336 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-780336 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-780336 delete -f testdata/storage-provisioner/pod.yaml: (2.572222381s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-780336 apply -f testdata/storage-provisioner/pod.yaml
I0908 13:43:20.004996 1410772 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1309cdf4-03fb-41b3-a61d-112eeb285b47] Pending
helpers_test.go:352: "sp-pod" [1309cdf4-03fb-41b3-a61d-112eeb285b47] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1309cdf4-03fb-41b3-a61d-112eeb285b47] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004247663s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-780336 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.47s)
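The PersistentVolumeClaim pass is a persistence round-trip: claim storage, write a file from one pod, delete that pod, and confirm the file is still there from a replacement pod. A hand-run sketch using the same manifests the test applies; the "kubectl wait" calls stand in for the test's own readiness polling and are not part of the original run:

	kubectl --context functional-780336 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-780336 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-780336 wait --for=condition=Ready pod/sp-pod --timeout=6m
	kubectl --context functional-780336 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-780336 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-780336 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-780336 wait --for=condition=Ready pod/sp-pod --timeout=6m
	# foo should still be listed: the volume outlived the first pod
	kubectl --context functional-780336 exec sp-pod -- ls /tmp/mount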

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh -n functional-780336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cp functional-780336:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd854310374/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh -n functional-780336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh -n functional-780336 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)

                                                
                                    
TestFunctional/parallel/MySQL (23.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-780336 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-j2kfw" [95c04292-610f-47d0-bd78-8839caacbf5f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-j2kfw" [95c04292-610f-47d0-bd78-8839caacbf5f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.005056349s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;": exit status 1 (118.376203ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 13:43:08.811329 1410772 retry.go:31] will retry after 900.453696ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;": exit status 1 (135.568433ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 13:43:09.848046 1410772 retry.go:31] will retry after 799.451136ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;": exit status 1 (198.081254ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 13:43:10.846578 1410772 retry.go:31] will retry after 1.532879733s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-780336 exec mysql-5bb876957f-j2kfw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.09s)

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1410772/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /etc/test/nested/copy/1410772/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1410772.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /etc/ssl/certs/1410772.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1410772.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /usr/share/ca-certificates/1410772.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/14107722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /etc/ssl/certs/14107722.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/14107722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /usr/share/ca-certificates/14107722.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
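
The checks above confirm that the host test certificates were synced into the node both under their original names (1410772.pem, 14107722.pem) and as hash-named copies (51391683.0, 3ec20f2e.0). The .0 names appear to follow the OpenSSL subject-hash convention; assuming a local copy of the certificate (the path below is illustrative, not taken from the report), the expected name can be recomputed with:

  # prints the subject hash that the .0 file name is derived from
  openssl x509 -noout -hash -in 1410772.pem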

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-780336 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
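
The go-template above simply flattens the label keys of the first node. A simpler spot-check with the same effect (not what the test runs) would be:

  # list every node together with its labels
  kubectl --context functional-780336 get nodes --show-labels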

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active docker": exit status 1 (299.291422ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active crio": exit status 1 (276.328726ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
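
The non-zero exits above are the expected result: on this containerd cluster the docker and crio units are inactive, systemctl is-active exits with status 3 for an inactive unit, and minikube ssh surfaces that as exit status 1. "inactive" plus a failing exit code is therefore what a passing run looks like here:

  # both checks should print "inactive" and return non-zero on a containerd cluster
  out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active docker"
  out/minikube-linux-amd64 -p functional-780336 ssh "sudo systemctl is-active crio"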

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-780336 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-780336 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4gnhc" [c7261158-b46d-429f-9f5a-e441c315345b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-4gnhc" [c7261158-b46d-429f-9f5a-e441c315345b] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003759968s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "405.770447ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.756328ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdany-port777061749/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757338959780308228" to /tmp/TestFunctionalparallelMountCmdany-port777061749/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757338959780308228" to /tmp/TestFunctionalparallelMountCmdany-port777061749/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757338959780308228" to /tmp/TestFunctionalparallelMountCmdany-port777061749/001/test-1757338959780308228
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (311.083796ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 13:42:40.091729 1410772 retry.go:31] will retry after 411.100604ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:42 test-1757338959780308228
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh cat /mount-9p/test-1757338959780308228
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-780336 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4d3393ee-c489-46e5-8c5d-05eb8f3d81ba] Pending
helpers_test.go:352: "busybox-mount" [4d3393ee-c489-46e5-8c5d-05eb8f3d81ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4d3393ee-c489-46e5-8c5d-05eb8f3d81ba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4d3393ee-c489-46e5-8c5d-05eb8f3d81ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004760642s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-780336 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdany-port777061749/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)
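
For reference, the sequence exercised above can be replayed by hand: start the 9p mount in the background, confirm it is visible inside the node, then inspect the exported files (the host path is the temporary directory from this run; running the mount with & instead of the test's daemon helper is an assumption):

  # expose a host directory inside the node at /mount-9p over 9p
  out/minikube-linux-amd64 mount -p functional-780336 \
    /tmp/TestFunctionalparallelMountCmdany-port777061749/001:/mount-9p &
  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-780336 ssh -- ls -la /mount-9p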

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "349.182832ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.754452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780336 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-780336
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-780336
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780336 image ls --format short --alsologtostderr:
I0908 13:43:13.071519 1460779 out.go:360] Setting OutFile to fd 1 ...
I0908 13:43:13.071635 1460779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.071644 1460779 out.go:374] Setting ErrFile to fd 2...
I0908 13:43:13.071649 1460779 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.071917 1460779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
I0908 13:43:13.072841 1460779 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.072994 1460779 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.073622 1460779 cli_runner.go:164] Run: docker container inspect functional-780336 --format={{.State.Status}}
I0908 13:43:13.098424 1460779 ssh_runner.go:195] Run: systemctl --version
I0908 13:43:13.098481 1460779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-780336
I0908 13:43:13.120610 1460779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/functional-780336/id_rsa Username:docker}
I0908 13:43:13.214454 1460779 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780336 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:4a8601 │ 22.5MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-780336  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-780336  │ sha256:28c409 │ 989B   │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ latest             │ sha256:ad5708 │ 72.3MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780336 image ls --format table --alsologtostderr:
I0908 13:43:13.583980 1461125 out.go:360] Setting OutFile to fd 1 ...
I0908 13:43:13.584487 1461125 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.584537 1461125 out.go:374] Setting ErrFile to fd 2...
I0908 13:43:13.584550 1461125 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.585035 1461125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
I0908 13:43:13.586569 1461125 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.586732 1461125 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.587254 1461125 cli_runner.go:164] Run: docker container inspect functional-780336 --format={{.State.Status}}
I0908 13:43:13.610945 1461125 ssh_runner.go:195] Run: systemctl --version
I0908 13:43:13.611006 1461125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-780336
I0908 13:43:13.631660 1461125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/functional-780336/id_rsa Username:docker}
I0908 13:43:13.718758 1461125 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780336 image ls --format json --alsologtostderr:
[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:a0af72f2ec6d628152b015a4
6d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-780336","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:ad5708199ec7d169c6837fe46e1646603d0f7
d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"72324501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"}
,{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:28c409876c35610ac69d8d1b1b5520588da9e69c25eca281bbcefe90a9cd5d42","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-780336"],"size":"989"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b2
6ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22477192"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"},{"id":"sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780336 image ls --format json --alsologtostderr:
I0908 13:43:13.351933 1460979 out.go:360] Setting OutFile to fd 1 ...
I0908 13:43:13.352257 1460979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.352267 1460979 out.go:374] Setting ErrFile to fd 2...
I0908 13:43:13.352271 1460979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.352461 1460979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
I0908 13:43:13.353084 1460979 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.353182 1460979 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.353754 1460979 cli_runner.go:164] Run: docker container inspect functional-780336 --format={{.State.Status}}
I0908 13:43:13.377549 1460979 ssh_runner.go:195] Run: systemctl --version
I0908 13:43:13.377619 1460979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-780336
I0908 13:43:13.398805 1460979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/functional-780336/id_rsa Username:docker}
I0908 13:43:13.490107 1460979 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780336 image ls --format yaml --alsologtostderr:
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22477192"
- id: sha256:ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "72324501"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-780336
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:28c409876c35610ac69d8d1b1b5520588da9e69c25eca281bbcefe90a9cd5d42
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-780336
size: "989"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780336 image ls --format yaml --alsologtostderr:
I0908 13:43:13.112185 1460802 out.go:360] Setting OutFile to fd 1 ...
I0908 13:43:13.112480 1460802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.112489 1460802 out.go:374] Setting ErrFile to fd 2...
I0908 13:43:13.112494 1460802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.112716 1460802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
I0908 13:43:13.113319 1460802 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.113525 1460802 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.114015 1460802 cli_runner.go:164] Run: docker container inspect functional-780336 --format={{.State.Status}}
I0908 13:43:13.135421 1460802 ssh_runner.go:195] Run: systemctl --version
I0908 13:43:13.135473 1460802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-780336
I0908 13:43:13.157600 1460802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/functional-780336/id_rsa Username:docker}
I0908 13:43:13.250682 1460802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
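
The four ImageList subtests above render the same crictl image inventory, differing only in presentation. The formats exercised are:

  out/minikube-linux-amd64 -p functional-780336 image ls --format short   # repo:tag names only
  out/minikube-linux-amd64 -p functional-780336 image ls --format table   # boxed table with IDs and sizes
  out/minikube-linux-amd64 -p functional-780336 image ls --format json    # single JSON array
  out/minikube-linux-amd64 -p functional-780336 image ls --format yaml    # YAML list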

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh pgrep buildkitd: exit status 1 (269.682283ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image build -t localhost/my-image:functional-780336 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 image build -t localhost/my-image:functional-780336 testdata/build --alsologtostderr: (5.908229488s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780336 image build -t localhost/my-image:functional-780336 testdata/build --alsologtostderr:
I0908 13:43:13.583290 1461119 out.go:360] Setting OutFile to fd 1 ...
I0908 13:43:13.583415 1461119 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.583424 1461119 out.go:374] Setting ErrFile to fd 2...
I0908 13:43:13.583428 1461119 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:43:13.583626 1461119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
I0908 13:43:13.584270 1461119 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.584966 1461119 config.go:182] Loaded profile config "functional-780336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:43:13.585552 1461119 cli_runner.go:164] Run: docker container inspect functional-780336 --format={{.State.Status}}
I0908 13:43:13.609759 1461119 ssh_runner.go:195] Run: systemctl --version
I0908 13:43:13.609835 1461119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-780336
I0908 13:43:13.631357 1461119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/functional-780336/id_rsa Username:docker}
I0908 13:43:13.718731 1461119 build_images.go:161] Building image from path: /tmp/build.2080716666.tar
I0908 13:43:13.718789 1461119 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 13:43:13.728384 1461119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2080716666.tar
I0908 13:43:13.732479 1461119 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2080716666.tar: stat -c "%s %y" /var/lib/minikube/build/build.2080716666.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2080716666.tar': No such file or directory
I0908 13:43:13.732525 1461119 ssh_runner.go:362] scp /tmp/build.2080716666.tar --> /var/lib/minikube/build/build.2080716666.tar (3072 bytes)
I0908 13:43:13.758762 1461119 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2080716666
I0908 13:43:13.768284 1461119 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2080716666 -xf /var/lib/minikube/build/build.2080716666.tar
I0908 13:43:13.777537 1461119 containerd.go:394] Building image: /var/lib/minikube/build/build.2080716666
I0908 13:43:13.777602 1461119 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2080716666 --local dockerfile=/var/lib/minikube/build/build.2080716666 --output type=image,name=localhost/my-image:functional-780336
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 3.0s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:bfeb33573e2f2e05e90ccd846442d05651d34931f2e16a4a931f5e343dd3557f 0.0s done
#8 exporting config sha256:36880ff55ca2f383b68e59bdb5c23179409352faef3497dec1eb3e7ac1fcc029 done
#8 naming to localhost/my-image:functional-780336 done
#8 DONE 0.1s
I0908 13:43:19.409311 1461119 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2080716666 --local dockerfile=/var/lib/minikube/build/build.2080716666 --output type=image,name=localhost/my-image:functional-780336: (5.631645547s)
I0908 13:43:19.409400 1461119 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2080716666
I0908 13:43:19.419929 1461119 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2080716666.tar
I0908 13:43:19.429426 1461119 build_images.go:217] Built localhost/my-image:functional-780336 from /tmp/build.2080716666.tar
I0908 13:43:19.429470 1461119 build_images.go:133] succeeded building to: functional-780336
I0908 13:43:19.429476 1461119 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.40s)
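
As the stderr above shows, image build packages the build context into a tarball, copies it to the node under /var/lib/minikube/build, and then drives buildctl against BuildKit on the node; the #1..#8 lines are BuildKit's own progress output. The user-facing invocation is just:

  out/minikube-linux-amd64 -p functional-780336 image build \
    -t localhost/my-image:functional-780336 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-780336 image ls   # the new localhost/my-image tag should be listed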

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.488539682s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-780336
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image load --daemon kicbase/echo-server:functional-780336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image load --daemon kicbase/echo-server:functional-780336 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-780336 image load --daemon kicbase/echo-server:functional-780336 --alsologtostderr: (1.058009938s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-780336
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image load --daemon kicbase/echo-server:functional-780336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image save kicbase/echo-server:functional-780336 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image rm kicbase/echo-server:functional-780336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service list -o json
functional_test.go:1504: Took "597.94936ms" to run "out/minikube-linux-amd64 -p functional-780336 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdspecific-port4006544909/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (384.176751ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 13:42:47.941620 1410772 retry.go:31] will retry after 386.491977ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdspecific-port4006544909/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "sudo umount -f /mount-9p": exit status 1 (336.581527ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-780336 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdspecific-port4006544909/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)
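
ImageSaveToFile and ImageLoadFromFile together exercise a save/load round trip: the tagged image is exported to a tarball on the host and then imported back into the node's containerd store. A hand-run equivalent (the tarball path below is illustrative, not the CI workspace path used above):

  out/minikube-linux-amd64 -p functional-780336 image save \
    kicbase/echo-server:functional-780336 ./echo-server-save.tar
  out/minikube-linux-amd64 -p functional-780336 image load ./echo-server-save.tar
  out/minikube-linux-amd64 -p functional-780336 image ls   # kicbase/echo-server:functional-780336 should reappear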

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30882
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-780336
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 image save --daemon kicbase/echo-server:functional-780336 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-780336
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30882
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
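
The endpoint printed above is the NodePort URL of the hello-node service on the cluster's docker network. To verify it actually answers (curl is an assumption here, not something the test runs):

  # fetch the service URL and probe it once
  URL=$(out/minikube-linux-amd64 -p functional-780336 service hello-node --url)
  curl -s "$URL"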

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T" /mount1: exit status 1 (389.921317ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:42:49.901164 1410772 retry.go:31] will retry after 340.907107ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-780336 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457142535/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
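
VerifyCleanup runs three 9p mounts against the same host directory and then kills them all with one command. A hand-run sketch of the same pattern, assuming an arbitrary host directory /tmp/data (a hypothetical path, not from the test):
  $ out/minikube-linux-amd64 mount -p functional-780336 /tmp/data:/mount1 --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 -p functional-780336 ssh "findmnt -T /mount1"    # confirm the mount is visible inside the node
  $ out/minikube-linux-amd64 mount -p functional-780336 --kill=true           # tear down every mount process for the profile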

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1459141: os: process already finished
helpers_test.go:525: unable to kill pid 1458978: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-780336 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a1549736-7a78-4f74-8ff9-afd94580ccfd] Pending
2025/09/08 13:42:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "nginx-svc" [a1549736-7a78-4f74-8ff9-afd94580ccfd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a1549736-7a78-4f74-8ff9-afd94580ccfd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.004014126s
I0908 13:43:12.332351 1410772 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-780336 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.232.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
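
Taken together, the serial tunnel steps amount to: start minikube tunnel, create a LoadBalancer-type service, and read the ingress IP the tunnel assigns. A minimal sketch using the nginx-svc manifest from the test data:
  $ out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr &
  $ kubectl --context functional-780336 apply -f testdata/testsvc.yaml
  $ kubectl --context functional-780336 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  10.98.232.193    # the tunnel-assigned address reported as working in this run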

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-780336 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-780336
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-780336
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-780336
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 13:44:22.203002 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m41.011389039s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (101.73s)
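
The HA cluster used by the remaining TestMultiControlPlane steps comes from a single start invocation. A minimal sketch with the same flags as this run:
  $ out/minikube-linux-amd64 -p ha-255415 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
  $ out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5    # reports host/kubelet/apiserver state for every node in the profile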

TestMultiControlPlane/serial/DeployApp (5.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 kubectl -- rollout status deployment/busybox: (3.248848625s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-dv2n7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-qt65m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-wsqrx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-dv2n7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-qt65m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-wsqrx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-dv2n7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-qt65m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-wsqrx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.34s)
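
DeployApp applies a small busybox deployment and checks in-cluster DNS from each replica. The equivalent by hand, using the same manifest and deployment name (the pod name is a placeholder; actual names are generated per run):
  $ out/minikube-linux-amd64 -p ha-255415 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  $ out/minikube-linux-amd64 -p ha-255415 kubectl -- rollout status deployment/busybox
  $ out/minikube-linux-amd64 -p ha-255415 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default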

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-dv2n7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-dv2n7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-qt65m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-qt65m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-wsqrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 kubectl -- exec busybox-7b57f96db7-wsqrx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (12.32s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 node add --alsologtostderr -v 5: (11.410540343s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (12.32s)
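
Nodes are added to the running profile with node add; both variants used in this run are sketched below:
  $ out/minikube-linux-amd64 -p ha-255415 node add --alsologtostderr -v 5                    # adds a worker node (m04 in this run)
  $ out/minikube-linux-amd64 -p ha-255415 node add --control-plane --alsologtostderr -v 5    # adds another control-plane node (see AddSecondaryNode below)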

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-255415 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.39s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp testdata/cp-test.txt ha-255415:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3728984178/001/cp-test_ha-255415.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415:/home/docker/cp-test.txt ha-255415-m02:/home/docker/cp-test_ha-255415_ha-255415-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test_ha-255415_ha-255415-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415:/home/docker/cp-test.txt ha-255415-m03:/home/docker/cp-test_ha-255415_ha-255415-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test_ha-255415_ha-255415-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415:/home/docker/cp-test.txt ha-255415-m04:/home/docker/cp-test_ha-255415_ha-255415-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test_ha-255415_ha-255415-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp testdata/cp-test.txt ha-255415-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3728984178/001/cp-test_ha-255415-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m02:/home/docker/cp-test.txt ha-255415:/home/docker/cp-test_ha-255415-m02_ha-255415.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test_ha-255415-m02_ha-255415.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m02:/home/docker/cp-test.txt ha-255415-m03:/home/docker/cp-test_ha-255415-m02_ha-255415-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test_ha-255415-m02_ha-255415-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m02:/home/docker/cp-test.txt ha-255415-m04:/home/docker/cp-test_ha-255415-m02_ha-255415-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test_ha-255415-m02_ha-255415-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp testdata/cp-test.txt ha-255415-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3728984178/001/cp-test_ha-255415-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m03:/home/docker/cp-test.txt ha-255415:/home/docker/cp-test_ha-255415-m03_ha-255415.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test_ha-255415-m03_ha-255415.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m03:/home/docker/cp-test.txt ha-255415-m02:/home/docker/cp-test_ha-255415-m03_ha-255415-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test_ha-255415-m03_ha-255415-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m03:/home/docker/cp-test.txt ha-255415-m04:/home/docker/cp-test_ha-255415-m03_ha-255415-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test_ha-255415-m03_ha-255415-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp testdata/cp-test.txt ha-255415-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3728984178/001/cp-test_ha-255415-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m04:/home/docker/cp-test.txt ha-255415:/home/docker/cp-test_ha-255415-m04_ha-255415.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415 "sudo cat /home/docker/cp-test_ha-255415-m04_ha-255415.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m04:/home/docker/cp-test.txt ha-255415-m02:/home/docker/cp-test_ha-255415-m04_ha-255415-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test_ha-255415-m04_ha-255415-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 cp ha-255415-m04:/home/docker/cp-test.txt ha-255415-m03:/home/docker/cp-test_ha-255415-m04_ha-255415-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m03 "sudo cat /home/docker/cp-test_ha-255415-m04_ha-255415-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.39s)
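
CopyFile exercises minikube cp in every direction (host to node, node to host, node to node) and verifies each copy over ssh. One representative pair, taken from the commands above:
  $ out/minikube-linux-amd64 -p ha-255415 cp testdata/cp-test.txt ha-255415-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p ha-255415 ssh -n ha-255415-m02 "sudo cat /home/docker/cp-test.txt"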

TestMultiControlPlane/serial/StopSecondaryNode (12.65s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 node stop m02 --alsologtostderr -v 5: (11.950800787s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5: exit status 7 (696.918622ms)

-- stdout --
	ha-255415
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-255415-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255415-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-255415-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 13:46:00.115728 1482249 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:46:00.116027 1482249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:00.116036 1482249 out.go:374] Setting ErrFile to fd 2...
	I0908 13:46:00.116041 1482249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:00.116271 1482249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:46:00.116476 1482249 out.go:368] Setting JSON to false
	I0908 13:46:00.116513 1482249 mustload.go:65] Loading cluster: ha-255415
	I0908 13:46:00.116599 1482249 notify.go:220] Checking for updates...
	I0908 13:46:00.117011 1482249 config.go:182] Loaded profile config "ha-255415": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:46:00.117038 1482249 status.go:174] checking status of ha-255415 ...
	I0908 13:46:00.117540 1482249 cli_runner.go:164] Run: docker container inspect ha-255415 --format={{.State.Status}}
	I0908 13:46:00.140185 1482249 status.go:371] ha-255415 host status = "Running" (err=<nil>)
	I0908 13:46:00.140214 1482249 host.go:66] Checking if "ha-255415" exists ...
	I0908 13:46:00.140483 1482249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255415
	I0908 13:46:00.161755 1482249 host.go:66] Checking if "ha-255415" exists ...
	I0908 13:46:00.162155 1482249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:46:00.162215 1482249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255415
	I0908 13:46:00.182899 1482249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/ha-255415/id_rsa Username:docker}
	I0908 13:46:00.286881 1482249 ssh_runner.go:195] Run: systemctl --version
	I0908 13:46:00.291529 1482249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:46:00.303425 1482249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:00.356062 1482249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-08 13:46:00.345129625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:00.356683 1482249 kubeconfig.go:125] found "ha-255415" server: "https://192.168.49.254:8443"
	I0908 13:46:00.356721 1482249 api_server.go:166] Checking apiserver status ...
	I0908 13:46:00.356766 1482249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:46:00.368924 1482249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1639/cgroup
	I0908 13:46:00.378423 1482249 api_server.go:182] apiserver freezer: "11:freezer:/docker/192c8e6c076cfb1b6ea216005ca69064422f39ebb850bdea5b771bfecebf0543/kubepods/burstable/pod8c65c9c1ac67b1d3c27fe1acb0589bb9/de8ae6df18a2356b7852f3e65a4275f4f5a30125d99f5130abceb45d62e51e32"
	I0908 13:46:00.378490 1482249 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/192c8e6c076cfb1b6ea216005ca69064422f39ebb850bdea5b771bfecebf0543/kubepods/burstable/pod8c65c9c1ac67b1d3c27fe1acb0589bb9/de8ae6df18a2356b7852f3e65a4275f4f5a30125d99f5130abceb45d62e51e32/freezer.state
	I0908 13:46:00.387509 1482249 api_server.go:204] freezer state: "THAWED"
	I0908 13:46:00.387548 1482249 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:46:00.393621 1482249 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:46:00.393652 1482249 status.go:463] ha-255415 apiserver status = Running (err=<nil>)
	I0908 13:46:00.393664 1482249 status.go:176] ha-255415 status: &{Name:ha-255415 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:46:00.393682 1482249 status.go:174] checking status of ha-255415-m02 ...
	I0908 13:46:00.393928 1482249 cli_runner.go:164] Run: docker container inspect ha-255415-m02 --format={{.State.Status}}
	I0908 13:46:00.411638 1482249 status.go:371] ha-255415-m02 host status = "Stopped" (err=<nil>)
	I0908 13:46:00.411672 1482249 status.go:384] host is not running, skipping remaining checks
	I0908 13:46:00.411682 1482249 status.go:176] ha-255415-m02 status: &{Name:ha-255415-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:46:00.411711 1482249 status.go:174] checking status of ha-255415-m03 ...
	I0908 13:46:00.412004 1482249 cli_runner.go:164] Run: docker container inspect ha-255415-m03 --format={{.State.Status}}
	I0908 13:46:00.430183 1482249 status.go:371] ha-255415-m03 host status = "Running" (err=<nil>)
	I0908 13:46:00.430212 1482249 host.go:66] Checking if "ha-255415-m03" exists ...
	I0908 13:46:00.430467 1482249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255415-m03
	I0908 13:46:00.448929 1482249 host.go:66] Checking if "ha-255415-m03" exists ...
	I0908 13:46:00.449290 1482249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:46:00.449346 1482249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255415-m03
	I0908 13:46:00.470069 1482249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/ha-255415-m03/id_rsa Username:docker}
	I0908 13:46:00.554955 1482249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:46:00.566891 1482249 kubeconfig.go:125] found "ha-255415" server: "https://192.168.49.254:8443"
	I0908 13:46:00.566923 1482249 api_server.go:166] Checking apiserver status ...
	I0908 13:46:00.566960 1482249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:46:00.577929 1482249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1516/cgroup
	I0908 13:46:00.588144 1482249 api_server.go:182] apiserver freezer: "11:freezer:/docker/c8be7bcc0fd7ddaa25d52cd23d5c812e47be1e578091b6d879308458f06dcc8f/kubepods/burstable/pod706cce20a68fa0b46bdea78f89caaad3/884047f286d370fbdd4f87a738a08c85a019e94a3a3d2a737d1fd0a60b36c00c"
	I0908 13:46:00.588207 1482249 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c8be7bcc0fd7ddaa25d52cd23d5c812e47be1e578091b6d879308458f06dcc8f/kubepods/burstable/pod706cce20a68fa0b46bdea78f89caaad3/884047f286d370fbdd4f87a738a08c85a019e94a3a3d2a737d1fd0a60b36c00c/freezer.state
	I0908 13:46:00.597581 1482249 api_server.go:204] freezer state: "THAWED"
	I0908 13:46:00.597621 1482249 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:46:00.601857 1482249 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:46:00.601886 1482249 status.go:463] ha-255415-m03 apiserver status = Running (err=<nil>)
	I0908 13:46:00.601896 1482249 status.go:176] ha-255415-m03 status: &{Name:ha-255415-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:46:00.601913 1482249 status.go:174] checking status of ha-255415-m04 ...
	I0908 13:46:00.602202 1482249 cli_runner.go:164] Run: docker container inspect ha-255415-m04 --format={{.State.Status}}
	I0908 13:46:00.620784 1482249 status.go:371] ha-255415-m04 host status = "Running" (err=<nil>)
	I0908 13:46:00.620812 1482249 host.go:66] Checking if "ha-255415-m04" exists ...
	I0908 13:46:00.621088 1482249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255415-m04
	I0908 13:46:00.641064 1482249 host.go:66] Checking if "ha-255415-m04" exists ...
	I0908 13:46:00.641444 1482249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:46:00.641494 1482249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255415-m04
	I0908 13:46:00.660154 1482249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/ha-255415-m04/id_rsa Username:docker}
	I0908 13:46:00.746610 1482249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:46:00.758439 1482249 status.go:176] ha-255415-m04 status: &{Name:ha-255415-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.65s)
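
The exit status 7 above is expected rather than a failure: status returns a non-zero code (7 in this run) while any node in the profile is stopped, which is exactly what the test provokes. Sketch of the two commands involved:
  $ out/minikube-linux-amd64 -p ha-255415 node stop m02 --alsologtostderr -v 5
  $ out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5    # non-zero exit while m02 is stopped; the other nodes still report Running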

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (10.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 node start m02 --alsologtostderr -v 5: (9.613428312s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (10.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 stop --alsologtostderr -v 5
E0908 13:46:38.341455 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 stop --alsologtostderr -v 5: (36.870170564s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 start --wait true --alsologtostderr -v 5
E0908 13:47:06.045024 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:38.874884 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:38.881455 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:38.892989 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:38.914532 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:38.956043 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:39.037645 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:39.199313 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:39.521118 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:40.162710 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:41.444388 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:44.006422 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:49.128109 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 start --wait true --alsologtostderr -v 5: (59.892689081s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 node delete m03 --alsologtostderr -v 5: (8.510458797s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.34s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0908 13:47:59.370175 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (35.95s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 stop --alsologtostderr -v 5
E0908 13:48:19.851619 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 stop --alsologtostderr -v 5: (35.834627015s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5: exit status 7 (113.214348ms)

-- stdout --
	ha-255415
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255415-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255415-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 13:48:35.754870 1499342 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:48:35.755183 1499342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:48:35.755196 1499342 out.go:374] Setting ErrFile to fd 2...
	I0908 13:48:35.755202 1499342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:48:35.755440 1499342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:48:35.755640 1499342 out.go:368] Setting JSON to false
	I0908 13:48:35.755680 1499342 mustload.go:65] Loading cluster: ha-255415
	I0908 13:48:35.755782 1499342 notify.go:220] Checking for updates...
	I0908 13:48:35.756148 1499342 config.go:182] Loaded profile config "ha-255415": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:48:35.756176 1499342 status.go:174] checking status of ha-255415 ...
	I0908 13:48:35.756626 1499342 cli_runner.go:164] Run: docker container inspect ha-255415 --format={{.State.Status}}
	I0908 13:48:35.775753 1499342 status.go:371] ha-255415 host status = "Stopped" (err=<nil>)
	I0908 13:48:35.775780 1499342 status.go:384] host is not running, skipping remaining checks
	I0908 13:48:35.775788 1499342 status.go:176] ha-255415 status: &{Name:ha-255415 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:48:35.775815 1499342 status.go:174] checking status of ha-255415-m02 ...
	I0908 13:48:35.776087 1499342 cli_runner.go:164] Run: docker container inspect ha-255415-m02 --format={{.State.Status}}
	I0908 13:48:35.795954 1499342 status.go:371] ha-255415-m02 host status = "Stopped" (err=<nil>)
	I0908 13:48:35.795991 1499342 status.go:384] host is not running, skipping remaining checks
	I0908 13:48:35.796000 1499342 status.go:176] ha-255415-m02 status: &{Name:ha-255415-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:48:35.796031 1499342 status.go:174] checking status of ha-255415-m04 ...
	I0908 13:48:35.796311 1499342 cli_runner.go:164] Run: docker container inspect ha-255415-m04 --format={{.State.Status}}
	I0908 13:48:35.814610 1499342 status.go:371] ha-255415-m04 host status = "Stopped" (err=<nil>)
	I0908 13:48:35.814654 1499342 status.go:384] host is not running, skipping remaining checks
	I0908 13:48:35.814672 1499342 status.go:176] ha-255415-m04 status: &{Name:ha-255415-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.95s)

TestMultiControlPlane/serial/RestartCluster (57.51s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 13:49:00.814642 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.70713883s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.51s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (26.16s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 node add --control-plane --alsologtostderr -v 5: (25.135615895s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-amd64 -p ha-255415 status --alsologtostderr -v 5: (1.024482001s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (26.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.001933582s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

TestJSONOutput/start/Command (48.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-103610 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0908 13:50:22.736147 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-103610 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (48.350658565s)
--- PASS: TestJSONOutput/start/Command (48.35s)
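
The TestJSONOutput group repeats the basic lifecycle commands with machine-readable output: each command prints a stream of JSON events instead of the usual text UI. Sketch, reusing the flags from this run:
  $ out/minikube-linux-amd64 start -p json-output-103610 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=containerd
  $ out/minikube-linux-amd64 pause -p json-output-103610 --output=json --user=testUser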

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-103610 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-103610 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-103610 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-103610 --output=json --user=testUser: (5.764592327s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-956622 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-956622 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.626811ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6200b21e-1600-4d72-88c9-bfe1846daef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-956622] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"75ec422d-f0f1-40f0-b39a-a7372fd7a24f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"035a321b-2f1a-440a-85a1-56978334a1c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"46c1bdc6-e648-4523-a31a-3b4258132c7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig"}}
	{"specversion":"1.0","id":"22cb0a9d-bdfb-428a-acb7-58218b479376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube"}}
	{"specversion":"1.0","id":"23cea38b-14bd-45f5-b06a-6bc1b67477fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d80bb40-b656-4442-a966-d3724cea7f6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"44ce8940-cc4c-49b0-b5d4-0c7e000556dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-956622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-956622
--- PASS: TestErrorJSONOutput (0.22s)
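
The events above are newline-delimited JSON in a CloudEvents-style envelope (specversion/type/data). A minimal Go sketch of how such a stream could be consumed, using only the field names visible in this run; this is an illustration, not the test's own parser:

// Decode line-delimited minikube JSON events (pipe `minikube start --output=json` into stdin).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// In the run above the error event carried exitcode 56 (DRV_UNSUPPORTED_OS).
			fmt.Printf("  exit code: %s\n", ev.Data["exitcode"])
		}
	}
}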

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.35s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-472295 --network=
E0908 13:51:38.341366 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-472295 --network=: (29.237162452s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-472295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-472295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-472295: (2.092095116s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.35s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.02s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-485903 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-485903 --network=bridge: (24.062332925s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-485903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-485903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-485903: (1.936574977s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.02s)

                                                
                                    
TestKicExistingNetwork (25.45s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0908 13:52:06.664653 1410772 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 13:52:06.682604 1410772 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 13:52:06.682699 1410772 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 13:52:06.682734 1410772 cli_runner.go:164] Run: docker network inspect existing-network
W0908 13:52:06.699636 1410772 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 13:52:06.699673 1410772 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0908 13:52:06.699692 1410772 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0908 13:52:06.699811 1410772 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 13:52:06.716801 1410772 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64b2234f707e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:9a:1c:9c:54:4c} reservation:<nil>}
I0908 13:52:06.717482 1410772 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021db770}
I0908 13:52:06.717522 1410772 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 13:52:06.717595 1410772 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 13:52:06.773717 1410772 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-743714 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-743714 --network=existing-network: (23.322517071s)
helpers_test.go:175: Cleaning up "existing-network-743714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-743714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-743714: (1.98096753s)
I0908 13:52:32.095689 1410772 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.45s)
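
The log above pre-creates the bridge network with docker, then hands it to minikube via --network. A minimal Go sketch of that setup step, with the names, subnet, and gateway copied from this run; a real caller would first probe for a free subnet the way network_create.go does:

// Pre-create a docker bridge network, then start minikube against it.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"existing-network")
	run("out/minikube-linux-amd64", "start",
		"-p", "existing-network-743714",
		"--network=existing-network")
}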

                                                
                                    
TestKicCustomSubnet (26.78s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-946989 --subnet=192.168.60.0/24
E0908 13:52:38.875688 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-946989 --subnet=192.168.60.0/24: (24.610574128s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-946989 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-946989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-946989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-946989: (2.149829985s)
--- PASS: TestKicCustomSubnet (26.78s)

                                                
                                    
TestKicStaticIP (26.79s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-161560 --static-ip=192.168.200.200
E0908 13:53:06.581494 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-161560 --static-ip=192.168.200.200: (24.524634307s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-161560 ip
helpers_test.go:175: Cleaning up "static-ip-161560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-161560
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-161560: (2.12678118s)
--- PASS: TestKicStaticIP (26.79s)
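
A minimal Go sketch of the assertion this test makes, reusing the profile name and address from the run above: start with --static-ip, then check that `minikube ip` reports the same address.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "static-ip-161560"
	const want = "192.168.200.200"

	if out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", profile, "--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ip").Output()
	if err != nil {
		log.Fatalf("ip failed: %v", err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("expected %s, got %s", want, got)
	}
	fmt.Println("static IP confirmed:", want)
}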

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (55.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-395788 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-395788 --driver=docker  --container-runtime=containerd: (23.684157404s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-408625 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-408625 --driver=docker  --container-runtime=containerd: (26.891778592s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-395788
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-408625
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-408625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-408625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-408625: (1.874405743s)
helpers_test.go:175: Cleaning up "first-395788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-395788
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-395788: (2.268002817s)
--- PASS: TestMinikubeProfile (55.93s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-577188 --memory=3072 --mount-string /tmp/TestMountStartserial3969221598/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-577188 --memory=3072 --mount-string /tmp/TestMountStartserial3969221598/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.471317707s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.47s)
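
A minimal Go sketch of the same mount round trip, with the flags taken from the invocation above; the host path here is illustrative rather than the test's temp directory:

// Start a no-Kubernetes profile with a host directory mounted at /minikube-host,
// then list it from inside the guest over `minikube ssh`.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "mount-start-1-577188"
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072",
		"--mount-string", "/tmp/mount-demo:/minikube-host", // hypothetical host path
		"--mount-port", "46464",
		"--no-kubernetes",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		log.Fatalf("ssh ls failed: %v", err)
	}
	fmt.Printf("guest sees:\n%s", ls)
}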

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-577188 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-591407 --memory=3072 --mount-string /tmp/TestMountStartserial3969221598/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-591407 --memory=3072 --mount-string /tmp/TestMountStartserial3969221598/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.294563359s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-591407 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-577188 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-577188 --alsologtostderr -v=5: (1.618468765s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-591407 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-591407
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-591407: (1.188270128s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-591407
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-591407: (5.945384476s)
--- PASS: TestMountStart/serial/RestartStopped (6.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-591407 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (54.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129197 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129197 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.846628976s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (54.30s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-129197 -- rollout status deployment/busybox: (16.418911594s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-7nllc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-qrcbs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-7nllc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-qrcbs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-7nllc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-qrcbs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.91s)
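
A minimal Go sketch of the per-pod DNS check performed above: enumerate the busybox pods with a jsonpath query, then nslookup an external and an in-cluster name from each. The context name and pod-name prefix are taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "multinode-129197",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("get pods failed: %v", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue // only the test deployment's pods
		}
		for _, name := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
			res, err := exec.Command("kubectl", "--context", "multinode-129197",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}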

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-7nllc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-7nllc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-qrcbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-129197 -- exec busybox-7b57f96db7-qrcbs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (14.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-129197 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-129197 -v=5 --alsologtostderr: (14.320230953s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-129197 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp testdata/cp-test.txt multinode-129197:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3071124852/001/cp-test_multinode-129197.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197:/home/docker/cp-test.txt multinode-129197-m02:/home/docker/cp-test_multinode-129197_multinode-129197-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test_multinode-129197_multinode-129197-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197:/home/docker/cp-test.txt multinode-129197-m03:/home/docker/cp-test_multinode-129197_multinode-129197-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test_multinode-129197_multinode-129197-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp testdata/cp-test.txt multinode-129197-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3071124852/001/cp-test_multinode-129197-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m02:/home/docker/cp-test.txt multinode-129197:/home/docker/cp-test_multinode-129197-m02_multinode-129197.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test_multinode-129197-m02_multinode-129197.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m02:/home/docker/cp-test.txt multinode-129197-m03:/home/docker/cp-test_multinode-129197-m02_multinode-129197-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test_multinode-129197-m02_multinode-129197-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp testdata/cp-test.txt multinode-129197-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3071124852/001/cp-test_multinode-129197-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m03:/home/docker/cp-test.txt multinode-129197:/home/docker/cp-test_multinode-129197-m03_multinode-129197.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197 "sudo cat /home/docker/cp-test_multinode-129197-m03_multinode-129197.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 cp multinode-129197-m03:/home/docker/cp-test.txt multinode-129197-m02:/home/docker/cp-test_multinode-129197-m03_multinode-129197-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 ssh -n multinode-129197-m02 "sudo cat /home/docker/cp-test_multinode-129197-m03_multinode-129197-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.25s)
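
A minimal Go sketch of one cp round trip from the sequence above: push testdata/cp-test.txt to a node with `minikube cp`, read it back over `minikube ssh`, and compare. Profile and node names match this run:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-129197"
	const node = "multinode-129197-m02"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the original")
	}
	log.Println("round trip ok")
}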

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-129197 node stop m03: (1.190598025s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129197 status: exit status 7 (470.127092ms)

                                                
                                                
-- stdout --
	multinode-129197
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-129197-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-129197-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr: exit status 7 (479.416439ms)

                                                
                                                
-- stdout --
	multinode-129197
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-129197-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-129197-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:56:24.595025 1563979 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:24.595286 1563979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:24.595302 1563979 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:24.595306 1563979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:24.595495 1563979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:56:24.595687 1563979 out.go:368] Setting JSON to false
	I0908 13:56:24.595720 1563979 mustload.go:65] Loading cluster: multinode-129197
	I0908 13:56:24.595847 1563979 notify.go:220] Checking for updates...
	I0908 13:56:24.596139 1563979 config.go:182] Loaded profile config "multinode-129197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:56:24.596160 1563979 status.go:174] checking status of multinode-129197 ...
	I0908 13:56:24.596566 1563979 cli_runner.go:164] Run: docker container inspect multinode-129197 --format={{.State.Status}}
	I0908 13:56:24.617571 1563979 status.go:371] multinode-129197 host status = "Running" (err=<nil>)
	I0908 13:56:24.617618 1563979 host.go:66] Checking if "multinode-129197" exists ...
	I0908 13:56:24.617882 1563979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-129197
	I0908 13:56:24.636809 1563979 host.go:66] Checking if "multinode-129197" exists ...
	I0908 13:56:24.637108 1563979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:56:24.637151 1563979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-129197
	I0908 13:56:24.655327 1563979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/multinode-129197/id_rsa Username:docker}
	I0908 13:56:24.746855 1563979 ssh_runner.go:195] Run: systemctl --version
	I0908 13:56:24.751426 1563979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:56:24.763011 1563979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:24.814244 1563979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2025-09-08 13:56:24.804181818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:56:24.814795 1563979 kubeconfig.go:125] found "multinode-129197" server: "https://192.168.67.2:8443"
	I0908 13:56:24.814830 1563979 api_server.go:166] Checking apiserver status ...
	I0908 13:56:24.814875 1563979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:56:24.825964 1563979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1517/cgroup
	I0908 13:56:24.834946 1563979 api_server.go:182] apiserver freezer: "11:freezer:/docker/893a1c0053bba68c6d483f4a8619d26422e634c546899e76770d8aed46aa59d3/kubepods/burstable/poddae44d282400a1e5d6017d30c3b79a32/00de86b31bba3973e2cdd9c281aa22fa071bb2a2359b62dfffed3f6c1931e317"
	I0908 13:56:24.835016 1563979 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/893a1c0053bba68c6d483f4a8619d26422e634c546899e76770d8aed46aa59d3/kubepods/burstable/poddae44d282400a1e5d6017d30c3b79a32/00de86b31bba3973e2cdd9c281aa22fa071bb2a2359b62dfffed3f6c1931e317/freezer.state
	I0908 13:56:24.843700 1563979 api_server.go:204] freezer state: "THAWED"
	I0908 13:56:24.843732 1563979 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:56:24.847954 1563979 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:56:24.847984 1563979 status.go:463] multinode-129197 apiserver status = Running (err=<nil>)
	I0908 13:56:24.847998 1563979 status.go:176] multinode-129197 status: &{Name:multinode-129197 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:56:24.848024 1563979 status.go:174] checking status of multinode-129197-m02 ...
	I0908 13:56:24.848312 1563979 cli_runner.go:164] Run: docker container inspect multinode-129197-m02 --format={{.State.Status}}
	I0908 13:56:24.865934 1563979 status.go:371] multinode-129197-m02 host status = "Running" (err=<nil>)
	I0908 13:56:24.865968 1563979 host.go:66] Checking if "multinode-129197-m02" exists ...
	I0908 13:56:24.866261 1563979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-129197-m02
	I0908 13:56:24.885486 1563979 host.go:66] Checking if "multinode-129197-m02" exists ...
	I0908 13:56:24.885758 1563979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:56:24.885802 1563979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-129197-m02
	I0908 13:56:24.903916 1563979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21508-1407098/.minikube/machines/multinode-129197-m02/id_rsa Username:docker}
	I0908 13:56:24.990654 1563979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:56:25.001678 1563979 status.go:176] multinode-129197-m02 status: &{Name:multinode-129197-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:56:25.001732 1563979 status.go:174] checking status of multinode-129197-m03 ...
	I0908 13:56:25.001998 1563979 cli_runner.go:164] Run: docker container inspect multinode-129197-m03 --format={{.State.Status}}
	I0908 13:56:25.019805 1563979 status.go:371] multinode-129197-m03 host status = "Stopped" (err=<nil>)
	I0908 13:56:25.019830 1563979 status.go:384] host is not running, skipping remaining checks
	I0908 13:56:25.019837 1563979 status.go:176] multinode-129197-m03 status: &{Name:multinode-129197-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
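
As the output above shows, `minikube status` still prints the per-node report but exits non-zero (7 in this run) once a host is stopped. A hedged Go sketch of handling that: treat an exec.ExitError as "report produced, cluster degraded" rather than a hard failure:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-129197", "status")
	out, err := cmd.Output() // stdout is returned even when the command exits non-zero
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes running:\n%s", out)
	case errors.As(err, &exitErr):
		fmt.Printf("status exit code %d (some node not running):\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not run status: %v", err)
	}
}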

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-129197 node start m03 -v=5 --alsologtostderr: (6.278875861s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.96s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129197
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-129197
E0908 13:56:38.344148 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-129197: (24.905674511s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129197 --wait=true -v=5 --alsologtostderr
E0908 13:57:38.875800 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129197 --wait=true -v=5 --alsologtostderr: (55.721204957s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129197
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.73s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-129197 node delete m03: (4.616007141s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 stop
E0908 13:58:01.406908 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-129197 stop: (23.686540351s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129197 status: exit status 7 (95.143462ms)

                                                
                                                
-- stdout --
	multinode-129197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-129197-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr: exit status 7 (91.072053ms)

                                                
                                                
-- stdout --
	multinode-129197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-129197-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:58:21.757669 1574307 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:58:21.757845 1574307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:58:21.757853 1574307 out.go:374] Setting ErrFile to fd 2...
	I0908 13:58:21.757857 1574307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:58:21.758044 1574307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 13:58:21.758220 1574307 out.go:368] Setting JSON to false
	I0908 13:58:21.758255 1574307 mustload.go:65] Loading cluster: multinode-129197
	I0908 13:58:21.758330 1574307 notify.go:220] Checking for updates...
	I0908 13:58:21.758663 1574307 config.go:182] Loaded profile config "multinode-129197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:58:21.758685 1574307 status.go:174] checking status of multinode-129197 ...
	I0908 13:58:21.759152 1574307 cli_runner.go:164] Run: docker container inspect multinode-129197 --format={{.State.Status}}
	I0908 13:58:21.777732 1574307 status.go:371] multinode-129197 host status = "Stopped" (err=<nil>)
	I0908 13:58:21.777763 1574307 status.go:384] host is not running, skipping remaining checks
	I0908 13:58:21.777772 1574307 status.go:176] multinode-129197 status: &{Name:multinode-129197 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:58:21.777804 1574307 status.go:174] checking status of multinode-129197-m02 ...
	I0908 13:58:21.778175 1574307 cli_runner.go:164] Run: docker container inspect multinode-129197-m02 --format={{.State.Status}}
	I0908 13:58:21.796898 1574307 status.go:371] multinode-129197-m02 host status = "Stopped" (err=<nil>)
	I0908 13:58:21.796933 1574307 status.go:384] host is not running, skipping remaining checks
	I0908 13:58:21.796940 1574307 status.go:176] multinode-129197-m02 status: &{Name:multinode-129197-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129197 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129197 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.416068019s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-129197 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.01s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-129197
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129197-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-129197-m02 --driver=docker  --container-runtime=containerd: exit status 14 (71.666513ms)

                                                
                                                
-- stdout --
	* [multinode-129197-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-129197-m02' is duplicated with machine name 'multinode-129197-m02' in profile 'multinode-129197'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-129197-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-129197-m03 --driver=docker  --container-runtime=containerd: (23.173472875s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-129197
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-129197: exit status 80 (277.263522ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-129197 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-129197-m03 already exists in multinode-129197-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-129197-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-129197-m03: (1.880202269s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.45s)
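
The name-conflict rules exercised above can be reproduced by hand with the minikube CLI. The sketch below is a minimal re-run of the same sequence using an installed minikube binary and illustrative profile names (not the harness-generated ones): a profile whose name collides with a machine inside an existing multi-node profile is refused with exit status 14, and "node add" refuses a node name already owned by a standalone profile (exit status 80).

	# "demo" is an existing multi-node profile that already owns a machine named "demo-m02"
	minikube node list -p demo
	# starting a new profile with that machine's name is rejected (MK_USAGE, exit status 14)
	minikube start -p demo-m02 --driver=docker --container-runtime=containerd
	# a non-conflicting name starts fine, but then blocks "node add" on the original profile (exit status 80)
	minikube start -p demo-m03 --driver=docker --container-runtime=containerd
	minikube node add -p demo
	# clean up the standalone profile
	minikube delete -p demo-m03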

                                                
                                    
TestPreload (141.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-109542 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-109542 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m21.490579056s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-109542 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-109542 image pull gcr.io/k8s-minikube/busybox: (2.235051697s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-109542
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-109542: (5.801480257s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-109542 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0908 14:01:38.341690 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/addons-569758/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-109542 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.644533543s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-109542 image list
helpers_test.go:175: Cleaning up "test-preload-109542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-109542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-109542: (2.326702815s)
--- PASS: TestPreload (141.72s)
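
TestPreload verifies that an image pulled into a cluster created with --preload=false survives a stop/start cycle. A minimal hand-run sketch of the same flow, assuming an installed minikube binary and an illustrative profile name:

	# create a cluster without the preloaded image tarball
	minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
	# pull an extra image into the cluster's container runtime
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	# stop, restart, and confirm the pulled image is still listed
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=3072 --driver=docker --container-runtime=containerd
	minikube -p preload-demo image list
	minikube delete -p preload-demo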

                                                
                                    
TestScheduledStopUnix (100.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-082919 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-082919 --memory=3072 --driver=docker  --container-runtime=containerd: (24.562444397s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082919 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-082919 -n scheduled-stop-082919
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082919 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 14:02:30.850688 1410772 retry.go:31] will retry after 126.843µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.851906 1410772 retry.go:31] will retry after 137.294µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.853118 1410772 retry.go:31] will retry after 292.691µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.854284 1410772 retry.go:31] will retry after 427.573µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.855451 1410772 retry.go:31] will retry after 417.144µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.856601 1410772 retry.go:31] will retry after 509.762µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.857747 1410772 retry.go:31] will retry after 933.52µs: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.858984 1410772 retry.go:31] will retry after 2.3435ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.862240 1410772 retry.go:31] will retry after 1.352433ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.864465 1410772 retry.go:31] will retry after 3.623569ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.868828 1410772 retry.go:31] will retry after 3.44868ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.873079 1410772 retry.go:31] will retry after 8.105623ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.881339 1410772 retry.go:31] will retry after 7.479844ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.889671 1410772 retry.go:31] will retry after 10.451725ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.900990 1410772 retry.go:31] will retry after 33.811965ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
I0908 14:02:30.935328 1410772 retry.go:31] will retry after 49.702063ms: open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/scheduled-stop-082919/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082919 --cancel-scheduled
E0908 14:02:38.875471 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082919 -n scheduled-stop-082919
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-082919
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082919 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-082919
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-082919: exit status 7 (74.476652ms)

                                                
                                                
-- stdout --
	scheduled-stop-082919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082919 -n scheduled-stop-082919
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082919 -n scheduled-stop-082919: exit status 7 (70.933406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-082919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-082919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-082919: (4.778823512s)
--- PASS: TestScheduledStopUnix (100.79s)
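
The scheduled-stop flow above amounts to scheduling, cancelling, and re-scheduling a delayed "minikube stop", then confirming the profile ends up stopped (status exit code 7). A minimal sketch with an illustrative profile name; the sleep length is an assumption, just long enough for the 15s schedule to fire:

	minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=containerd
	# schedule a stop five minutes out; TimeToStop becomes non-empty
	minikube stop -p sched-demo --schedule 5m
	minikube status -p sched-demo --format={{.TimeToStop}}
	# cancel the pending stop, then schedule a short one and wait for it to fire
	minikube stop -p sched-demo --cancel-scheduled
	minikube stop -p sched-demo --schedule 15s
	sleep 20
	# once the stop has fired, status reports Stopped and exits with code 7
	minikube status -p sched-demo
	minikube delete -p sched-demo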

                                                
                                    
TestInsufficientStorage (10s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-658344 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-658344 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.595734394s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"884151af-01d4-40cb-aabb-651493d9060c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-658344] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"261192fc-6e89-427a-94ab-bbfa9ce1806b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"e69553c3-c0c2-48ad-a2e9-82368d34bed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c81a79f4-9a6e-4dfe-8359-967146ca5891","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig"}}
	{"specversion":"1.0","id":"fc603bca-5de6-4b2a-8c0a-d699921debb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube"}}
	{"specversion":"1.0","id":"3ec22213-ab99-4bb3-a7eb-f9c2f4fff431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2dfae2bd-17f6-41e6-8ce0-a0953ac9c013","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e7e6f479-5421-49e0-9b75-c242b0e286c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"37f11645-832e-48a7-81e4-f03f6cc44df9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c464f94c-8a35-4f06-bfdf-7aec3fd44b69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c07ec942-0d61-4792-b40e-e0f0adb7b30b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3377d718-0072-4fc9-a89c-0cf5ea03ad9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-658344\" primary control-plane node in \"insufficient-storage-658344\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"015cc074-5358-46f7-bf33-b41a80fb24ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4eac1cf0-dd04-457e-8c97-6f519e8aad26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"90699e00-7f20-4c89-a900-f526390963af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-658344 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-658344 --output=json --layout=cluster: exit status 7 (276.493771ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-658344","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-658344","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:03:54.503385 1597388 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-658344" does not appear in /home/jenkins/minikube-integration/21508-1407098/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-658344 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-658344 --output=json --layout=cluster: exit status 7 (279.724408ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-658344","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-658344","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:03:54.784417 1597486 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-658344" does not appear in /home/jenkins/minikube-integration/21508-1407098/kubeconfig
	E0908 14:03:54.795281 1597486 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/insufficient-storage-658344/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-658344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-658344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-658344: (1.844105648s)
--- PASS: TestInsufficientStorage (10.00s)
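
TestInsufficientStorage drives an ordinary "start" with the test-only environment variables visible in the JSON events above, so the storage check fails with exit code 26 (RSRC_DOCKER_STORAGE) and cluster status reports code 507. A rough sketch; the exact semantics of the two variables are assumed from their names and the logged values, and the profile name is illustrative:

	# test-only knobs seen in the events above (simulated total and available storage)
	export MINIKUBE_TEST_STORAGE_CAPACITY=100
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	# start fails the storage check and exits 26, emitting machine-readable JSON events
	minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
	# cluster-level status reflects the InsufficientStorage condition (HTTP-style code 507, exit 7)
	minikube status -p storage-demo --output=json --layout=cluster
	minikube delete -p storage-demo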

                                                
                                    
TestRunningBinaryUpgrade (79.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.60689316 start -p running-upgrade-260181 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.60689316 start -p running-upgrade-260181 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (51.010099745s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-260181 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-260181 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.133895155s)
helpers_test.go:175: Cleaning up "running-upgrade-260181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-260181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-260181: (4.717639135s)
--- PASS: TestRunningBinaryUpgrade (79.43s)
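
The running-binary upgrade is: create a cluster with an older minikube release, then run "start" on the same profile with the newer binary while the cluster is still up. Sketch below; the old-binary path and profile name are illustrative (the harness downloads the v1.32.0 release to a temp file):

	# the old release creates and leaves the cluster running (note the older --vm-driver spelling)
	/tmp/minikube-v1.32.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
	# the new binary re-runs start against the same, still-running profile and takes it over
	minikube start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd
	minikube delete -p upgrade-demo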

                                                
                                    
TestKubernetesUpgrade (158.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.453794446s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-168199
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-168199: (1.87584716s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-168199 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-168199 status --format={{.Host}}: exit status 7 (92.566658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m48.051304339s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-168199 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (95.582422ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-168199] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-168199
	    minikube start -p kubernetes-upgrade-168199 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1681992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-168199 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168199 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.402305528s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-168199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-168199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-168199: (3.940988752s)
--- PASS: TestKubernetesUpgrade (158.99s)
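
The Kubernetes upgrade path above is stop-then-start with a newer --kubernetes-version; an in-place downgrade is refused with exit 106 and minikube prints the recovery options quoted in the stderr block (delete and recreate, start a second profile at the old version, or keep the new version). Condensed sketch with an illustrative profile name:

	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube stop -p k8s-upgrade-demo
	# upgrading the stopped cluster to a newer Kubernetes is supported
	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=containerd
	kubectl --context k8s-upgrade-demo version --output=json
	# asking for an older version again fails with K8S_DOWNGRADE_UNSUPPORTED (exit 106)
	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd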

                                                
                                    
TestMissingContainerUpgrade (95.12s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3563841124 start -p missing-upgrade-052760 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3563841124 start -p missing-upgrade-052760 --memory=3072 --driver=docker  --container-runtime=containerd: (28.856871077s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-052760
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-052760
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-052760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-052760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.894536668s)
helpers_test.go:175: Cleaning up "missing-upgrade-052760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-052760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-052760: (1.910929525s)
--- PASS: TestMissingContainerUpgrade (95.12s)
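
TestMissingContainerUpgrade deletes the node container out from under an existing profile and checks that the current binary can recreate it from the profile's on-disk state. Sketch; the old-binary path and profile name are illustrative, and the container name matching the profile name follows the docker commands in the log above:

	/tmp/minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=containerd
	# simulate the node container disappearing while the profile config remains
	docker stop missing-demo
	docker rm missing-demo
	# the current binary notices the missing container and rebuilds the node
	minikube start -p missing-demo --memory=3072 --driver=docker --container-runtime=containerd
	minikube delete -p missing-demo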

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestPause/serial/Start (66.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-968712 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-968712 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m6.228623773s)
--- PASS: TestPause/serial/Start (66.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.900151327 start -p stopped-upgrade-041726 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E0908 14:04:01.943236 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.900151327 start -p stopped-upgrade-041726 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (46.671981361s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.900151327 -p stopped-upgrade-041726 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.900151327 -p stopped-upgrade-041726 stop: (1.277017376s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-041726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-041726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.030280936s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.98s)
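
The stopped-binary variant differs from TestRunningBinaryUpgrade only in that the old binary stops the cluster before the new binary starts it again; the MinikubeLogs subtest further below then reads the upgraded cluster's logs. Sketch under the same illustrative conventions:

	/tmp/minikube-v1.32.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.32.0 -p stopped-demo stop
	# the newer binary restarts the stopped cluster, then its logs are readable
	minikube start -p stopped-demo --memory=3072 --driver=docker --container-runtime=containerd
	minikube logs -p stopped-demo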

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.13s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-968712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-968712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.118429735s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.13s)

                                                
                                    
TestPause/serial/Pause (1.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-968712 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-968712 --alsologtostderr -v=5: (1.008807299s)
--- PASS: TestPause/serial/Pause (1.01s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-968712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-968712 --output=json --layout=cluster: exit status 2 (421.495644ms)

                                                
                                                
-- stdout --
	{"Name":"pause-968712","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-968712","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
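
For a paused profile, "status --output=json --layout=cluster" exits with code 2 but still prints the cluster document, encoding component states as HTTP-style codes (200 OK, 405 Stopped, 418 Paused, 500 Error; 507 InsufficientStorage appeared in the storage test earlier). A small sketch of pulling those fields out; the use of jq is an assumption, not part of the test:

	# exit status 2 signals the paused state; the JSON document is still written to stdout
	minikube status -p pause-demo --output=json --layout=cluster | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'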

                                                
                                    
TestPause/serial/Unpause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-968712 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

                                                
                                    
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-968712 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-968712 --alsologtostderr -v=5: (1.047886785s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
TestPause/serial/DeletePaused (3.4s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-968712 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-968712 --alsologtostderr -v=5: (3.396653516s)
--- PASS: TestPause/serial/DeletePaused (3.40s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-041726
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-041726: (1.620457449s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.62s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-968712
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-968712: exit status 1 (19.924239ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-968712: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)
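
Taken together, the pause subtests walk the full lifecycle: pause, verify, unpause, pause again, delete, then confirm the Docker-level resources are gone. Condensed sketch with an illustrative profile name:

	minikube start -p pause-demo --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=containerd
	minikube pause -p pause-demo
	minikube status -p pause-demo --output=json --layout=cluster   # exit 2, apiserver reported as Paused
	minikube unpause -p pause-demo
	minikube pause -p pause-demo
	minikube delete -p pause-demo
	# after deletion the profile's container, volume and network should all be gone
	docker ps -a
	docker volume inspect pause-demo   # exit 1: no such volume
	docker network ls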

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.768128ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-720775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
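
As the stderr block says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a globally configured version has to be unset before starting without Kubernetes. A minimal sketch with an illustrative profile name:

	# rejected with MK_USAGE (exit 14): a pinned version makes no sense without Kubernetes
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# clear any globally configured version, then start without Kubernetes
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd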

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-720775 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-720775 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.302482858s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-720775 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.65s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.614601992s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-720775 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-720775 status -o json: exit status 2 (291.997003ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-720775","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-720775
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-720775: (1.902134542s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.81s)
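
Restarting an existing profile with --no-kubernetes keeps the host container running but leaves the kubelet and apiserver down, which "status -o json" reports alongside exit code 2. Sketch with an illustrative profile name:

	# bring the profile up with Kubernetes first...
	minikube start -p nok8s-demo --memory=3072 --driver=docker --container-runtime=containerd
	# ...then restart it with Kubernetes disabled
	minikube start -p nok8s-demo --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
	# Host stays Running while Kubelet and APIServer report Stopped; the command exits 2
	minikube -p nok8s-demo status -o json
	minikube delete -p nok8s-demo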

                                                
                                    
TestNetworkPlugins/group/false (5.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-964891 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-964891 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (182.404125ms)

                                                
                                                
-- stdout --
	* [false-964891] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 14:05:52.169896 1631063 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:05:52.170042 1631063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:05:52.170055 1631063 out.go:374] Setting ErrFile to fd 2...
	I0908 14:05:52.170059 1631063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:05:52.170267 1631063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-1407098/.minikube/bin
	I0908 14:05:52.170957 1631063 out.go:368] Setting JSON to false
	I0908 14:05:52.172091 1631063 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13696,"bootTime":1757326656,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:05:52.172162 1631063 start.go:140] virtualization: kvm guest
	I0908 14:05:52.174787 1631063 out.go:179] * [false-964891] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:05:52.176567 1631063 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:05:52.176626 1631063 notify.go:220] Checking for updates...
	I0908 14:05:52.180185 1631063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:05:52.181711 1631063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-1407098/kubeconfig
	I0908 14:05:52.182934 1631063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-1407098/.minikube
	I0908 14:05:52.184214 1631063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:05:52.185480 1631063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:05:52.187133 1631063 config.go:182] Loaded profile config "NoKubernetes-720775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0908 14:05:52.187269 1631063 config.go:182] Loaded profile config "kubernetes-upgrade-168199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:05:52.187363 1631063 config.go:182] Loaded profile config "missing-upgrade-052760": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I0908 14:05:52.187477 1631063 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:05:52.216420 1631063 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:05:52.216579 1631063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:05:52.284128 1631063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2025-09-08 14:05:52.271377365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:05:52.284248 1631063 docker.go:318] overlay module found
	I0908 14:05:52.286445 1631063 out.go:179] * Using the docker driver based on user configuration
	I0908 14:05:52.288116 1631063 start.go:304] selected driver: docker
	I0908 14:05:52.288136 1631063 start.go:918] validating driver "docker" against <nil>
	I0908 14:05:52.288151 1631063 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:05:52.290263 1631063 out.go:203] 
	W0908 14:05:52.291676 1631063 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0908 14:05:52.293235 1631063 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-964891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-964891" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-720775
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-052760
contexts:
- context:
    cluster: NoKubernetes-720775
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-720775
  name: NoKubernetes-720775
- context:
    cluster: missing-upgrade-052760
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-052760
  name: missing-upgrade-052760
current-context: NoKubernetes-720775
kind: Config
preferences: {}
users:
- name: NoKubernetes-720775
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.key
- name: missing-upgrade-052760
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.key

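Every ">>> k8s:" and ">>> host:" dump above fails for the same reason: the collector is pointed at a "false-964891" context/profile that does not exist in this kubeconfig, which only lists NoKubernetes-720775 and missing-upgrade-052760. A minimal way to confirm what is actually present, run by hand with the same binaries the suite invokes (not part of the captured output):

    kubectl config get-contexts
    kubectl config current-context
    out/minikube-linux-amd64 profile list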
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-964891

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-964891"

                                                
                                                
----------------------- debugLogs end: false-964891 [took: 5.524374315s] --------------------------------
helpers_test.go:175: Cleaning up "false-964891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-964891
--- PASS: TestNetworkPlugins/group/false (5.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-720775 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.610753993s)
--- PASS: TestNoKubernetes/serial/Start (4.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-720775 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-720775 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.44646ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
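The check passes because the probed command exits non-zero: systemctl is-active --quiet succeeds only when the named unit is active, so the status-3 exit is exactly what a --no-kubernetes profile should produce. The same probe can be repeated by hand; a minimal sketch assuming the NoKubernetes-720775 profile is still running:

    out/minikube-linux-amd64 ssh -p NoKubernetes-720775 "sudo systemctl is-active --quiet service kubelet"
    echo "kubelet active check exit code: $?"   # non-zero means kubelet is not running, the expected state here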

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (33.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (17.499457513s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.563041565s)
--- PASS: TestNoKubernetes/serial/ProfileList (33.06s)
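Both listings pass but take a noticeable 17.5s and 15.6s, presumably because each listing has to query the state of every existing profile. The JSON form is the easiest to inspect afterwards; a minimal sketch that just pretty-prints whatever the command emits (assumes python3 on the agent, nothing about the JSON schema):

    out/minikube-linux-amd64 profile list --output=json | python3 -m json.tool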

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-720775
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-720775: (1.207715216s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-720775 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-720775 --driver=docker  --container-runtime=containerd: (6.174164371s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-720775 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-720775 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.336348ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (67.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-582872 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-582872 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m7.171646707s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (67.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-002511 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 14:07:38.875534 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/functional-780336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-002511 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m8.577988132s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (54.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-197025 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-197025 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (54.742538049s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-582872 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [77a6b534-98ad-4164-a565-1a4dd19e2d97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [77a6b534-98ad-4164-a565-1a4dd19e2d97] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.006103316s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-582872 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)
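DeployApp applies testdata/busybox.yaml, polls (with an 8m budget) for pods labelled integration-test=busybox in the default namespace to reach Running, then execs `ulimit -n` in the pod. Outside the harness the same wait can be approximated with kubectl alone; a rough sketch, not the helper the test actually uses:

    kubectl --context old-k8s-version-582872 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-582872 wait --for=condition=Ready pod -l integration-test=busybox -n default --timeout=8m
    kubectl --context old-k8s-version-582872 exec busybox -- /bin/sh -c "ulimit -n"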

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-582872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-582872 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-582872 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-582872 --alsologtostderr -v=3: (11.953983064s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-002511 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e995232-6aa6-4da1-a43a-8e105b1c7f93] Pending
helpers_test.go:352: "busybox" [8e995232-6aa6-4da1-a43a-8e105b1c7f93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e995232-6aa6-4da1-a43a-8e105b1c7f93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004317401s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-002511 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582872 -n old-k8s-version-582872
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582872 -n old-k8s-version-582872: exit status 7 (86.124697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-582872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (55.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-582872 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-582872 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.123480169s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582872 -n old-k8s-version-582872
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-197025 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c7b123b2-3f77-4bfe-8439-a6b03e2b5b90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c7b123b2-3f77-4bfe-8439-a6b03e2b5b90] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004403112s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-197025 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-002511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-002511 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-002511 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-002511 --alsologtostderr -v=3: (12.068612452s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-197025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-197025 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-197025 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-197025 --alsologtostderr -v=3: (12.084696152s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-002511 -n no-preload-002511
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-002511 -n no-preload-002511: exit status 7 (80.207411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-002511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (50.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-002511 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-002511 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.574410796s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-002511 -n no-preload-002511
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-197025 -n embed-certs-197025
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-197025 -n embed-certs-197025: exit status 7 (88.190673ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-197025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (51.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-197025 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-197025 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (51.278442186s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-197025 -n embed-certs-197025
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-288682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-288682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (46.494452049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8gvlx" [e364ae7d-fb64-4d31-9e89-a9e1de5bd690] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003979671s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
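UserAppExistsAfterStop only needs the dashboard pod to come back after the restart, which it verifies by watching for pods with the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. The equivalent one-off check from a shell, as a sketch:

    kubectl --context old-k8s-version-582872 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard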

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8gvlx" [e364ae7d-fb64-4d31-9e89-a9e1de5bd690] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004194029s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-582872 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-582872 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-582872 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582872 -n old-k8s-version-582872
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582872 -n old-k8s-version-582872: exit status 2 (318.315233ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-582872 -n old-k8s-version-582872
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-582872 -n old-k8s-version-582872: exit status 2 (311.423651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-582872 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582872 -n old-k8s-version-582872
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-582872 -n old-k8s-version-582872
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)
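The Pause subtest cycles pause -> status -> unpause -> status: right after pausing, the API server reports Paused and the kubelet Stopped, and minikube status signals that with exit status 2, which the test explicitly tolerates ("may be ok"). The same cycle by hand, sketched with the exact status invocations the test uses (the `|| true` just keeps the non-zero status exit from aborting a `set -e` shell):

    out/minikube-linux-amd64 pause -p old-k8s-version-582872 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582872 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-582872 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-582872 --alsologtostderr -v=1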

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2n2vs" [d8861b6b-4601-4682-b43b-d62ef93cea66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005354745s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (29.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268101 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268101 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (29.785222315s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.79s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2n2vs" [d8861b6b-4601-4682-b43b-d62ef93cea66] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004908538s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-002511 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gbnvv" [d82b39bb-88e0-46a8-a814-2ab341181039] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004496964s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-002511 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-002511 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-002511 -n no-preload-002511
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-002511 -n no-preload-002511: exit status 2 (338.286373ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-002511 -n no-preload-002511
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-002511 -n no-preload-002511: exit status 2 (338.43277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-002511 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-002511 -n no-preload-002511
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-002511 -n no-preload-002511
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gbnvv" [d82b39bb-88e0-46a8-a814-2ab341181039] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004566161s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-197025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (48.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (48.593365309s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-197025 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-197025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-197025 -n embed-certs-197025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-197025 -n embed-certs-197025: exit status 2 (388.84954ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-197025 -n embed-certs-197025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-197025 -n embed-certs-197025: exit status 2 (359.007534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-197025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-197025 --alsologtostderr -v=1: (1.068744447s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-197025 -n embed-certs-197025
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-197025 -n embed-certs-197025
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (54.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (54.272117116s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.27s)
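The kindnet variant starts with --cni=kindnet and is considered up once the node and system pods are ready. To look at the CNI pods themselves after such a start, a sketch assuming kindnet's usual app=kindnet label on its daemonset pods:

    kubectl --context kindnet-964891 get pods -n kube-system -l app=kindnet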

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-288682 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6f9c30c4-207a-4bb1-970a-9778022a457f] Pending
helpers_test.go:352: "busybox" [6f9c30c4-207a-4bb1-970a-9778022a457f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6f9c30c4-207a-4bb1-970a-9778022a457f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003371363s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-288682 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.267829672s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-268101 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-268101 --alsologtostderr -v=3: (1.223680445s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268101 -n newest-cni-268101
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268101 -n newest-cni-268101: exit status 7 (86.64119ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268101 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268101 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (15.531441195s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268101 -n newest-cni-268101
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.87s)
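The newest-cni profile is restarted with --network-plugin=cni and an explicit kubeadm pod-network-cidr but without installing any CNI plugin, so --wait is limited to apiserver, system_pods and default_sa; that is also why the later UserAppExistsAfterStop and AddonExistsAfterStop steps for this profile are no-ops ("cni mode requires additional setup before pods can schedule"). The restart amounts to re-running the original start command against the existing profile, roughly:
    out/minikube-linux-amd64 start -p newest-cni-268101 --memory=3072 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0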

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-288682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-288682 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-288682 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-288682 --alsologtostderr -v=3: (12.088382131s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-268101 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-268101 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268101 -n newest-cni-268101
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268101 -n newest-cni-268101: exit status 2 (323.633413ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268101 -n newest-cni-268101
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268101 -n newest-cni-268101: exit status 2 (341.141467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-268101 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-268101 --alsologtostderr -v=1: (1.031906716s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268101 -n newest-cni-268101
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268101 -n newest-cni-268101
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)
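The pause check drives the cluster through pause and unpause and reads the component states in between: while paused, status reports the API server as Paused and the kubelet as Stopped, each with exit status 2, which the test treats as acceptable. A rough reproduction with this run's profile:
    out/minikube-linux-amd64 pause -p newest-cni-268101 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268101 -n newest-cni-268101 || true   # "Paused"
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268101 -n newest-cni-268101 || true     # "Stopped"
    out/minikube-linux-amd64 unpause -p newest-cni-268101 --alsologtostderr -v=1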

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682: exit status 7 (84.297329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-288682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-288682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-288682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (49.004497526s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.36s)
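The default-k8s-diff-port profile exists to exercise a non-default API server port: the restart passes --apiserver-port=8444 instead of minikube's usual 8443. One hedged way to confirm the port from inside the node (assuming kubeadm wires the value through to the apiserver's --secure-port flag, which is not shown in this log) is:
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-288682 "pgrep -a kube-apiserver"   # expect --secure-port=8444 among the flags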

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-964891 "pgrep -a kubelet"
I0908 14:11:02.861801 1410772 config.go:182] Loaded profile config "auto-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qnnmg" [ed20e01e-9518-4d87-bcac-bbcdd058c5f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qnnmg" [ed20e01e-9518-4d87-bcac-bbcdd058c5f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004842208s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
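Together, the NetCatPod, DNS, Localhost and HairPin steps above give the auto profile a basic pod-networking smoke test: a netcat deployment is (re)created, in-cluster DNS is resolved from inside the pod, the pod probes its own loopback port, and finally it dials its own Service name, which only succeeds if hairpin traffic is handled. A rough manual equivalent using this run's context (the wait line substitutes kubectl's rollout condition for the harness's own pod polling):
    kubectl --context auto-964891 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-964891 wait --for=condition=Available deployment/netcat --timeout=15m
    kubectl --context auto-964891 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: the pod reaches itself via its Service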

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mj4x5" [307b923d-c9a4-4ba1-a572-a41f343d8cb0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003977463s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-964891 "pgrep -a kubelet"
I0908 14:11:22.798669 1410772 config.go:182] Loaded profile config "kindnet-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hh6rn" [06a67aa9-bb17-4831-a0ca-301be358d4c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hh6rn" [06a67aa9-bb17-4831-a0ca-301be358d4c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004286219s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.949993125s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.95s)
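Unlike the built-in plugin names, --cni here points at a manifest file, so minikube applies testdata/kube-flannel.yaml directly instead of one of its bundled CNI configurations. A quick way to confirm the DaemonSet came up, assuming the custom manifest uses the same namespace and label as the upstream flannel deployment (the ones the flannel ControllerPod check further down relies on), is:
    kubectl --context custom-flannel-964891 -n kube-flannel get pods -l app=flannel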

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bkbdg" [2f5ab99f-97be-4557-80e1-a357eb500494] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003586013s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bkbdg" [2f5ab99f-97be-4557-80e1-a357eb500494] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003280347s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-288682 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-288682 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
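VerifyKubernetesImages dumps every image present in the node's container runtime and logs the ones outside minikube's expected Kubernetes image set; here the kindnet and busybox images pulled by earlier steps show up, and the test only reports them rather than failing. The raw listing can be inspected with:
    out/minikube-linux-amd64 -p default-k8s-diff-port-288682 image list --format=json
Piping that through something like grep -v registry.k8s.io gives a rough view of the non-default images, though how the harness itself classifies images is internal to the test and the grep is only a crude approximation.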

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-288682 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682: exit status 2 (364.514169ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682: exit status 2 (429.129467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-288682 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-288682 -n default-k8s-diff-port-288682
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)
E0908 14:13:23.419357 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.425873 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.437363 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.458890 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.500373 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.581839 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:23.743447 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:24.065202 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:24.707392 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:25.989604 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.551272 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:33.673227 1410772 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/old-k8s-version-582872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (38.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (38.085041685s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.09s)
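--enable-default-cni=true is the older spelling of asking minikube for its basic bridge CNI; newer invocations express the same thing as --cni=bridge, which is what the bridge group further down uses. Apart from that flag, the two profiles in this report are started with the same options, roughly:
    out/minikube-linux-amd64 start -p enable-default-cni-964891 --memory=3072 --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p bridge-964891 --memory=3072 --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=containerd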

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (122.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (2m2.227810117s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-964891 "pgrep -a kubelet"
I0908 14:12:24.542344 1410772 config.go:182] Loaded profile config "custom-flannel-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-285z2" [53c936ef-fb78-42ec-91f9-92d8915354f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-285z2" [53c936ef-fb78-42ec-91f9-92d8915354f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004082503s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-964891 "pgrep -a kubelet"
I0908 14:12:29.966934 1410772 config.go:182] Loaded profile config "enable-default-cni-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zp788" [b88d4714-95e0-4c02-8ac6-9e1c5f5caa98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zp788" [b88d4714-95e0-4c02-8ac6-9e1c5f5caa98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004525609s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (40.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-964891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (40.154877948s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-964891 "pgrep -a kubelet"
I0908 14:13:34.333430 1410772 config.go:182] Loaded profile config "bridge-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mnw8d" [41ea1d4b-6377-4d64-bee8-cd29d7ca865e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mnw8d" [41ea1d4b-6377-4d64-bee8-cd29d7ca865e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003710556s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tknpg" [f86c5794-349a-464c-9e68-bc32d5936dd9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004056511s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
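ControllerPod waits for the CNI's own DaemonSet pod to be healthy before the connectivity probes run; for flannel that means a Running pod labelled app=flannel in the kube-flannel namespace. The equivalent manual check is:
    kubectl --context flannel-964891 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-964891 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m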

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-964891 "pgrep -a kubelet"
I0908 14:14:07.087993 1410772 config.go:182] Loaded profile config "flannel-964891": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-964891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rfj2v" [6d654a16-218c-4388-b8ea-d87d948a6c7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rfj2v" [6d654a16-218c-4388-b8ea-d87d948a6c7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003736631s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-964891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-964891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    

Test skip (25/326)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-263059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-263059
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-964891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-964891

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-964891

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-964891

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-964891

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/hosts:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/resolv.conf:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-964891

>>> host: crictl pods:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: crictl containers:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> k8s: describe netcat deployment:
error: context "kubenet-964891" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-964891" does not exist

>>> k8s: netcat logs:
error: context "kubenet-964891" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-964891" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-964891" does not exist

>>> k8s: coredns logs:
error: context "kubenet-964891" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-964891" does not exist

>>> k8s: api server logs:
error: context "kubenet-964891" does not exist

>>> host: /etc/cni:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: ip a s:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: ip r s:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: iptables-save:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: iptables table nat:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-964891" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-964891" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-964891" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: kubelet daemon config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> k8s: kubelet logs:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-720775
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-052760
contexts:
- context:
    cluster: NoKubernetes-720775
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-720775
  name: NoKubernetes-720775
- context:
    cluster: missing-upgrade-052760
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-052760
  name: missing-upgrade-052760
current-context: NoKubernetes-720775
kind: Config
preferences: {}
users:
- name: NoKubernetes-720775
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.key
- name: missing-upgrade-052760
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-964891

>>> host: docker daemon status:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: docker daemon config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: docker system info:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: cri-docker daemon status:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: cri-docker daemon config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: cri-dockerd version:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: containerd daemon status:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: containerd daemon config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: containerd config dump:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: crio daemon status:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: crio daemon config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: /etc/crio:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

>>> host: crio config:
* Profile "kubenet-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-964891"

----------------------- debugLogs end: kubenet-964891 [took: 3.362141587s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-964891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-964891
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

x
+
TestNetworkPlugins/group/cilium (6.1s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-964891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-964891

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-964891

>>> host: /etc/nsswitch.conf:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/hosts:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/resolv.conf:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-964891

>>> host: crictl pods:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: crictl containers:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> k8s: describe netcat deployment:
error: context "cilium-964891" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-964891" does not exist

>>> k8s: netcat logs:
error: context "cilium-964891" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-964891" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-964891" does not exist

>>> k8s: coredns logs:
error: context "cilium-964891" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-964891" does not exist

>>> k8s: api server logs:
error: context "cilium-964891" does not exist

>>> host: /etc/cni:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: ip a s:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: ip r s:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: iptables-save:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: iptables table nat:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-964891

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-964891

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-964891" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-964891" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-964891

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-964891

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-964891" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-964891" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-964891" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-964891" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-964891" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: kubelet daemon config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> k8s: kubelet logs:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-720775
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:06:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-168199
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-1407098/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-052760
contexts:
- context:
    cluster: NoKubernetes-720775
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-720775
  name: NoKubernetes-720775
- context:
    cluster: kubernetes-upgrade-168199
    user: kubernetes-upgrade-168199
  name: kubernetes-upgrade-168199
- context:
    cluster: missing-upgrade-052760
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:05:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-052760
  name: missing-upgrade-052760
current-context: kubernetes-upgrade-168199
kind: Config
preferences: {}
users:
- name: NoKubernetes-720775
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/NoKubernetes-720775/client.key
- name: kubernetes-upgrade-168199
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/kubernetes-upgrade-168199/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/kubernetes-upgrade-168199/client.key
- name: missing-upgrade-052760
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.crt
    client-key: /home/jenkins/minikube-integration/21508-1407098/.minikube/profiles/missing-upgrade-052760/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-964891

>>> host: docker daemon status:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: docker daemon config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: docker system info:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: cri-docker daemon status:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: cri-docker daemon config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: cri-dockerd version:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: containerd daemon status:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: containerd daemon config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: containerd config dump:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: crio daemon status:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: crio daemon config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: /etc/crio:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

>>> host: crio config:
* Profile "cilium-964891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-964891"

----------------------- debugLogs end: cilium-964891 [took: 5.880548412s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-964891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-964891
--- SKIP: TestNetworkPlugins/group/cilium (6.10s)