Test Report: Docker_Linux_docker_arm64 19370

                    
Commit: dd51e72d60a15da3a1a4a8c267729efa6313a896 · 2024-08-06 · 35671

Failed tests (1/351)

Order  Failed test            Duration (s)
269    TestKubernetesUpgrade  355.02
TestKubernetesUpgrade (355.02s)
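For reference, the sequence of minikube invocations recorded in the log below can be condensed as follows. The `run` wrapper is a stand-in that only prints each command, so the sketch is safe to execute as-is; replace its body with `"$@"` to run the commands for real (assuming a built `out/minikube-linux-arm64` binary).

```shell
# Condensed command sequence from this test run (profile name and versions
# are taken from the log). 'run' just echoes; swap in "$@" to execute.
MINIKUBE="out/minikube-linux-arm64"
PROFILE="kubernetes-upgrade-473733"
run() { echo "+ $*"; }

# 1) Create the cluster on the oldest supported version.
run "$MINIKUBE" start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.20.0 \
    --driver=docker --container-runtime=docker
# 2) Stop it; a subsequent status check returning exit status 7 is tolerated.
run "$MINIKUBE" stop -p "$PROFILE"
run "$MINIKUBE" -p "$PROFILE" status --format='{{.Host}}'
# 3) Upgrade in place to the newest (RC) version.
run "$MINIKUBE" start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.31.0-rc.0 \
    --driver=docker --container-runtime=docker
# 4) Attempt a downgrade, which is expected to be rejected.
run "$MINIKUBE" start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.20.0 \
    --driver=docker --container-runtime=docker
```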

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0806 07:53:36.074111  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:53:43.565608  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.570866  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.581132  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.601417  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.641753  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.721996  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:43.882301  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:44.202541  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:44.843092  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:46.123486  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:48.684656  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:53:53.805480  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:54:04.046542  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:54:24.527601  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.453940067s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-473733
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-473733: (1.222687804s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-473733 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-473733 status --format={{.Host}}: exit status 7 (71.988124ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0806 07:54:32.074892  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:55:05.489557  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 07:55:33.028999  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.376833255s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-473733 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (80.232324ms)

-- stdout --
	* [kubernetes-upgrade-473733] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-473733
	    minikube start -p kubernetes-upgrade-473733 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4737332 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-473733 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
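The rejection above is the expected outcome of this step: the test treats exit status 106 together with the `K8S_DOWNGRADE_UNSUPPORTED` reason as a pass. A minimal sketch of that assertion (the helper name is hypothetical, not part of the test harness):

```shell
# Hypothetical helper: accept a downgrade attempt only if it failed the way
# this log shows -- exit status 106 and a K8S_DOWNGRADE_UNSUPPORTED reason.
downgrade_rejected() {
  local status="$1" stderr="$2"
  [ "$status" -eq 106 ] && printf '%s\n' "$stderr" | grep -q 'K8S_DOWNGRADE_UNSUPPORTED'
}

if downgrade_rejected 106 'X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: ...'; then
  echo "downgrade rejected as expected"
fi
```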
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0806 07:59:11.252052  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 90 (16.368100926s)

-- stdout --
	* [kubernetes-upgrade-473733] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-473733" primary control-plane node in "kubernetes-upgrade-473733" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Updating the running docker "kubernetes-upgrade-473733" container ...
	
	

-- /stdout --
** stderr ** 
	I0806 07:59:08.315500 1167706 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:59:08.315623 1167706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:59:08.315635 1167706 out.go:304] Setting ErrFile to fd 2...
	I0806 07:59:08.315640 1167706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:59:08.315895 1167706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:59:08.316247 1167706 out.go:298] Setting JSON to false
	I0806 07:59:08.317549 1167706 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20492,"bootTime":1722910656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:59:08.317660 1167706 start.go:139] virtualization:  
	I0806 07:59:08.320567 1167706 out.go:177] * [kubernetes-upgrade-473733] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0806 07:59:08.322611 1167706 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:59:08.322774 1167706 notify.go:220] Checking for updates...
	I0806 07:59:08.327214 1167706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:59:08.329196 1167706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:59:08.330984 1167706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:59:08.332564 1167706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0806 07:59:08.334418 1167706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:59:08.337573 1167706 config.go:182] Loaded profile config "kubernetes-upgrade-473733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 07:59:08.338101 1167706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:59:08.366239 1167706 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:59:08.366374 1167706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:59:08.428424 1167706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-06 07:59:08.418822948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:59:08.428538 1167706 docker.go:307] overlay module found
	I0806 07:59:08.430308 1167706 out.go:177] * Using the docker driver based on existing profile
	I0806 07:59:08.432209 1167706 start.go:297] selected driver: docker
	I0806 07:59:08.432221 1167706 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-473733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-473733 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:59:08.432338 1167706 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:59:08.433018 1167706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:59:08.489654 1167706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-06 07:59:08.479520689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:59:08.490037 1167706 cni.go:84] Creating CNI manager for ""
	I0806 07:59:08.490062 1167706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 07:59:08.490120 1167706 start.go:340] cluster config:
	{Name:kubernetes-upgrade-473733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-473733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Sta
ticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:59:08.493173 1167706 out.go:177] * Starting "kubernetes-upgrade-473733" primary control-plane node in "kubernetes-upgrade-473733" cluster
	I0806 07:59:08.495208 1167706 cache.go:121] Beginning downloading kic base image for docker with docker
	I0806 07:59:08.496915 1167706 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0806 07:59:08.498554 1167706 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 07:59:08.498612 1167706 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 07:59:08.498617 1167706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0806 07:59:08.498621 1167706 cache.go:56] Caching tarball of preloaded images
	I0806 07:59:08.498844 1167706 preload.go:172] Found /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 07:59:08.498856 1167706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 07:59:08.498986 1167706 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubernetes-upgrade-473733/config.json ...
	W0806 07:59:08.529357 1167706 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0806 07:59:08.529379 1167706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0806 07:59:08.529477 1167706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0806 07:59:08.529500 1167706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0806 07:59:08.529512 1167706 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0806 07:59:08.529521 1167706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0806 07:59:08.529527 1167706 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0806 07:59:08.666649 1167706 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0806 07:59:08.666692 1167706 cache.go:194] Successfully downloaded all kic artifacts
	I0806 07:59:08.666722 1167706 start.go:360] acquireMachinesLock for kubernetes-upgrade-473733: {Name:mke43a03255dc09d0ea9df96a9b8641c9cefe583 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:59:08.666790 1167706 start.go:364] duration metric: took 42.608µs to acquireMachinesLock for "kubernetes-upgrade-473733"
	I0806 07:59:08.666817 1167706 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:59:08.666846 1167706 fix.go:54] fixHost starting: 
	I0806 07:59:08.667139 1167706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-473733 --format={{.State.Status}}
	I0806 07:59:08.683714 1167706 fix.go:112] recreateIfNeeded on kubernetes-upgrade-473733: state=Running err=<nil>
	W0806 07:59:08.683743 1167706 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:59:08.686976 1167706 out.go:177] * Updating the running docker "kubernetes-upgrade-473733" container ...
	I0806 07:59:08.690109 1167706 machine.go:94] provisionDockerMachine start ...
	I0806 07:59:08.690203 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:08.706573 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:08.706858 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:08.706875 1167706 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:59:08.847356 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-473733
	
	I0806 07:59:08.847389 1167706 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-473733"
	I0806 07:59:08.847585 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:08.866298 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:08.866551 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:08.866626 1167706 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-473733 && echo "kubernetes-upgrade-473733" | sudo tee /etc/hostname
	I0806 07:59:09.020476 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-473733
	
	I0806 07:59:09.020568 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:09.041409 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:09.041685 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:09.041708 1167706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-473733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-473733/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-473733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:59:09.179429 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:59:09.179551 1167706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19370-879111/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-879111/.minikube}
	I0806 07:59:09.179574 1167706 ubuntu.go:177] setting up certificates
	I0806 07:59:09.179597 1167706 provision.go:84] configureAuth start
	I0806 07:59:09.179656 1167706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-473733
	I0806 07:59:09.197653 1167706 provision.go:143] copyHostCerts
	I0806 07:59:09.197735 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem, removing ...
	I0806 07:59:09.197750 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem
	I0806 07:59:09.197834 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem (1082 bytes)
	I0806 07:59:09.197946 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem, removing ...
	I0806 07:59:09.197958 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem
	I0806 07:59:09.197987 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem (1123 bytes)
	I0806 07:59:09.198071 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem, removing ...
	I0806 07:59:09.198082 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem
	I0806 07:59:09.198110 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem (1679 bytes)
	I0806 07:59:09.198177 1167706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-473733 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-473733 localhost minikube]
	I0806 07:59:10.656343 1167706 provision.go:177] copyRemoteCerts
	I0806 07:59:10.656450 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:59:10.656514 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:10.680273 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:10.778029 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 07:59:10.849122 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 07:59:10.984149 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
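The three `scp` steps above stage the TLS material that the dockerd flags later reference (`--tlscacert /etc/docker/ca.pem`, `--tlscert /etc/docker/server.pem`, `--tlskey /etc/docker/server-key.pem`). A minimal sketch of the relationship being provisioned, using throwaway self-signed material in a temp directory (all CNs and paths here are hypothetical, not taken from the test run): the server cert must verify against the CA cert.

```shell
# Throwaway CA + server cert standing in for the files scp'd above.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca-key.pem" \
  -out "$dir/ca.pem" -subj "/CN=test-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout "$dir/server-key.pem" \
  -out "$dir/server.csr" -subj "/CN=minikube" 2>/dev/null
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" \
  -CAkey "$dir/ca-key.pem" -CAcreateserial -out "$dir/server.pem" \
  -days 1 2>/dev/null
# dockerd with --tlsverify performs the equivalent of this chain check.
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```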
	I0806 07:59:11.080952 1167706 provision.go:87] duration metric: took 1.901339623s to configureAuth
	I0806 07:59:11.080986 1167706 ubuntu.go:193] setting minikube options for container-runtime
	I0806 07:59:11.081221 1167706 config.go:182] Loaded profile config "kubernetes-upgrade-473733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 07:59:11.081297 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.118142 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.118430 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.118449 1167706 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 07:59:11.449582 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0806 07:59:11.449600 1167706 ubuntu.go:71] root file system type: overlay
	I0806 07:59:11.449715 1167706 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 07:59:11.449778 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.487491 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.487828 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.487916 1167706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 07:59:11.785735 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 07:59:11.785825 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.813099 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.813348 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.813372 1167706 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 07:59:12.059353 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 07:59:12.059385 1167706 machine.go:97] duration metric: took 3.369256653s to provisionDockerMachine
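The `sudo diff -u … || { sudo mv …; systemctl … }` command above is an update-if-changed idiom: `diff` exits 0 when the installed unit already matches the new one, so the replace/daemon-reload/restart branch runs only when the content actually changed. A minimal sketch with throwaway files in place of the real unit paths:

```shell
# Stand-ins for docker.service and docker.service.new.
old=$(mktemp); new=$(mktemp)
echo "ExecStart=old" > "$old"
echo "ExecStart=new" > "$new"
# diff exits non-zero on difference, so the || branch installs the new file;
# identical files would leave $old untouched and skip the branch entirely.
diff -u "$old" "$new" >/dev/null || mv "$new" "$old"
cat "$old"
```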
	I0806 07:59:12.059398 1167706 start.go:293] postStartSetup for "kubernetes-upgrade-473733" (driver="docker")
	I0806 07:59:12.059412 1167706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:59:12.059521 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:59:12.059573 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.093044 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.242143 1167706 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:59:12.259383 1167706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0806 07:59:12.259560 1167706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0806 07:59:12.259594 1167706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0806 07:59:12.259617 1167706 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0806 07:59:12.259644 1167706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/addons for local assets ...
	I0806 07:59:12.259725 1167706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/files for local assets ...
	I0806 07:59:12.259862 1167706 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem -> 8844952.pem in /etc/ssl/certs
	I0806 07:59:12.260023 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:59:12.274383 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem --> /etc/ssl/certs/8844952.pem (1708 bytes)
	I0806 07:59:12.379668 1167706 start.go:296] duration metric: took 320.254182ms for postStartSetup
	I0806 07:59:12.379752 1167706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:59:12.379795 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.413127 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.556274 1167706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0806 07:59:12.576343 1167706 fix.go:56] duration metric: took 3.909511968s for fixHost
	I0806 07:59:12.576366 1167706 start.go:83] releasing machines lock for "kubernetes-upgrade-473733", held for 3.909562657s
	I0806 07:59:12.576442 1167706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-473733
	I0806 07:59:12.603268 1167706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:59:12.603348 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.603565 1167706 ssh_runner.go:195] Run: cat /version.json
	I0806 07:59:12.603608 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.650249 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.651834 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:13.115144 1167706 ssh_runner.go:195] Run: systemctl --version
	I0806 07:59:13.127694 1167706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 07:59:13.148984 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0806 07:59:13.182038 1167706 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0806 07:59:13.182127 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0806 07:59:13.205127 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0806 07:59:13.231787 1167706 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:59:13.231828 1167706 start.go:495] detecting cgroup driver to use...
	I0806 07:59:13.231868 1167706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0806 07:59:13.231969 1167706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:59:13.255844 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0806 07:59:13.269902 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 07:59:13.291391 1167706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 07:59:13.291503 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 07:59:13.315824 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 07:59:13.342863 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 07:59:13.353796 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 07:59:13.364876 1167706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:59:13.374708 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 07:59:13.405203 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 07:59:13.464012 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
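The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place; the key edit for the detected "cgroupfs" driver flips `SystemdCgroup` to `false` while the captured group `\1` preserves the original indentation. A sketch against a hypothetical one-line config fragment (the real file has many more sections):

```shell
# Hypothetical config.toml fragment with the indentation containerd uses.
cfg=$(mktemp)
printf '            SystemdCgroup = true\n' > "$cfg"
# Same substitution as the log line: \1 keeps the leading whitespace intact.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```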
	I0806 07:59:13.479141 1167706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:59:13.493542 1167706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:59:13.521007 1167706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:13.688230 1167706 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 07:59:24.049169 1167706 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.360901477s)
	I0806 07:59:24.049194 1167706 start.go:495] detecting cgroup driver to use...
	I0806 07:59:24.049230 1167706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0806 07:59:24.049279 1167706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 07:59:24.066684 1167706 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0806 07:59:24.066759 1167706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 07:59:24.083609 1167706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:59:24.108060 1167706 ssh_runner.go:195] Run: which cri-dockerd
	I0806 07:59:24.113833 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 07:59:24.124056 1167706 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0806 07:59:24.143698 1167706 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 07:59:24.253351 1167706 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 07:59:24.398172 1167706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 07:59:24.398317 1167706 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 07:59:24.427431 1167706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:24.530481 1167706 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 07:59:24.594943 1167706 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 07:59:24.618953 1167706 out.go:177] 
	W0806 07:59:24.620678 1167706 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:54:28 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:28 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:28.982784895Z" level=info msg="Starting up"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.007226669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.020947661Z" level=info msg="Loading containers: start."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.195087421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.250781871Z" level=info msg="Loading containers: done."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264220646Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264314641Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:29 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295562911Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295738177Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.498289159Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500277453Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500743404Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.661126858Z" level=info msg="Starting up"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.690669608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.713507679Z" level=info msg="Loading containers: start."
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.939391089Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.018362189Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032305116Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032388314Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.037117672Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067478791Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067596335Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070495344Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070733238Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.118039265Z" level=info msg="Starting up"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.140415283Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.153552838Z" level=info msg="Loading containers: start."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.343031110Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.384135707Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396493547Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396572659Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426045044Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426275324Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.793201483Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795765936Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795964479Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.838051141Z" level=info msg="Starting up"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.862812366Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.549566992Z" level=info msg="Loading containers: start."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.716440912Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.758282476Z" level=info msg="Loading containers: done."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770203715Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770280103Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801262335Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:42 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801395592Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:55:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:08.184139618Z" level=info msg="ignoring event" container=ccef9156af7fbe1b796b9040a14de52f493aa699297a7e3441b434daa3ecff6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.205176343Z" level=info msg="ignoring event" container=c89cbaed1e927c2f1fbe5e70ec42ece0068d04124c670a04d86ecdead9e9b2a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.940113710Z" level=info msg="ignoring event" container=4f17de248cc65e948965d131a79e0defc245b509639d5edeeb61c08aeafb6224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:41.976492443Z" level=info msg="ignoring event" container=8b0b371e338782aad76f095da8b5efd54b9a034c3b696f72480414bd61f4cd77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:09 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:09.140504054Z" level=info msg="ignoring event" container=33dfc14da1883dd53617ccba2bc2f659b27f78b897542363916d13e3777ed1df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:20 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:20.149573580Z" level=info msg="ignoring event" container=e99d1ebfc9b156453d9df9f4686fb6f09ffe0a06a3766831235ae239a586a1e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:57 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:57.932084317Z" level=info msg="ignoring event" container=ba46a40760deccd8d2b87f7fe8630e9ad8f789c11604173d87c14f48cd62e636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:57:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:57:08.956266666Z" level=info msg="ignoring event" container=07c9e0b5c709a767e4d5f74acd1997c7334664792e54fffa4667322fe0c8e3bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:01 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:01.361468687Z" level=info msg="ignoring event" container=8bc392bb375d02a89afb76b15fd6a5e6f338b449e4a75c4ecbea357f92e478cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:12 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:12.381489817Z" level=info msg="ignoring event" container=51e6f5c664348ebf4bb1e12517cbc2876ba623a7f5ebcb819db59b69e064a448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.143146114Z" level=info msg="ignoring event" container=d7c1110fc7744f8cef459ac9b29e7af212968c0eca608f514cb6242ddd07a24b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.227014502Z" level=info msg="ignoring event" container=9cba2c1eabd39d80695c125e39103faa089fd1f520e51db4c001d326d776fd36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.328048224Z" level=info msg="ignoring event" container=e562f6cf886c63356d0af139642755fe27be0fa42750b81a00842a376dd2ca65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.432736039Z" level=info msg="ignoring event" container=405026d69170c13883f94d5c3d01fed25cd23d2f548ec10f46e0bab497f659b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.708098018Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.970582442Z" level=info msg="ignoring event" container=191a74b1a92663f8ad4852f0cd67b90c025b6dc58677a4b04d38f511b9252c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.980267195Z" level=info msg="ignoring event" container=d96c662f207ff4edd3b075a28594efe9f3cc4bccefac9b980255110001a6e190 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.024092135Z" level=info msg="ignoring event" container=5df65826498ffa5bfe199a2abdaba2304dee22a66d7ae26d7ec6ac6471476580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.035079956Z" level=info msg="ignoring event" container=93f38ef14ca1a6a715f56d870417edec44229ceff62bef18f1537ec66db49ce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056417707Z" level=info msg="ignoring event" container=167edd45891d680e8fea99d07a8ea8dcb471ca89ae77b71829a04114c186e923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056471728Z" level=info msg="ignoring event" container=6df58aa48c0aa0fcc200394dd6c1d56e27f817b0b692154a1e39d516e9132d6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056500388Z" level=info msg="ignoring event" container=9d7a14e7d2ed0e05c5e4230c4b7d5f33937b37b530527824914ebc7488e56c86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.128501684Z" level=info msg="ignoring event" container=fbb0b29c0b760be2641278ccecf58ef7d405b4bae98fbdeaaddac73e9993c8f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189072560Z" level=info msg="ignoring event" container=e14a0717ba7e5abb2556615ca03bce0368500b10787976e13fcf66f07954d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189125285Z" level=info msg="ignoring event" container=931d0cf38f4164152362ac73611947dfb7b818d1787c6917761ee38cf6455430 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189154026Z" level=info msg="ignoring event" container=7de233655ab48863c8775d476f4105f295a01822e0f7470e3fa5998fee116149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189171905Z" level=info msg="ignoring event" container=a78e3de62a6109945ba7fdbd4320addd16c2d168430910f6400d45e454f8c252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189194477Z" level=info msg="ignoring event" container=6dae9807d021f5b611b16ef76724238d00a5f13b1d0ec7aef55d007d153f5d92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.173418238Z" level=info msg="ignoring event" container=dabd770834038015a989eb6cc43f3b3791df209e3a416da0ec1bf79149bbfc0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.176274994Z" level=info msg="ignoring event" container=d248a4aa9cdaa7ae7b9378771b93e46c2195189f67e90c78e3dd00da17d11d12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.833199755Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.873403418Z" level=info msg="ignoring event" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.916239472Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.917246457Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10551]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10592]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10637]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:54:28 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:28 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:28.982784895Z" level=info msg="Starting up"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.007226669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.020947661Z" level=info msg="Loading containers: start."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.195087421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.250781871Z" level=info msg="Loading containers: done."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264220646Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264314641Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:29 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295562911Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295738177Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.498289159Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500277453Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500743404Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.661126858Z" level=info msg="Starting up"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.690669608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.713507679Z" level=info msg="Loading containers: start."
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.939391089Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.018362189Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032305116Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032388314Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.037117672Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067478791Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067596335Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070495344Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070733238Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.118039265Z" level=info msg="Starting up"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.140415283Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.153552838Z" level=info msg="Loading containers: start."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.343031110Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.384135707Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396493547Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396572659Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426045044Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426275324Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.793201483Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795765936Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795964479Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.838051141Z" level=info msg="Starting up"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.862812366Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.549566992Z" level=info msg="Loading containers: start."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.716440912Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.758282476Z" level=info msg="Loading containers: done."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770203715Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770280103Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801262335Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:42 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801395592Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:55:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:08.184139618Z" level=info msg="ignoring event" container=ccef9156af7fbe1b796b9040a14de52f493aa699297a7e3441b434daa3ecff6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.205176343Z" level=info msg="ignoring event" container=c89cbaed1e927c2f1fbe5e70ec42ece0068d04124c670a04d86ecdead9e9b2a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.940113710Z" level=info msg="ignoring event" container=4f17de248cc65e948965d131a79e0defc245b509639d5edeeb61c08aeafb6224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:41.976492443Z" level=info msg="ignoring event" container=8b0b371e338782aad76f095da8b5efd54b9a034c3b696f72480414bd61f4cd77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:09 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:09.140504054Z" level=info msg="ignoring event" container=33dfc14da1883dd53617ccba2bc2f659b27f78b897542363916d13e3777ed1df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:20 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:20.149573580Z" level=info msg="ignoring event" container=e99d1ebfc9b156453d9df9f4686fb6f09ffe0a06a3766831235ae239a586a1e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:57 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:57.932084317Z" level=info msg="ignoring event" container=ba46a40760deccd8d2b87f7fe8630e9ad8f789c11604173d87c14f48cd62e636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:57:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:57:08.956266666Z" level=info msg="ignoring event" container=07c9e0b5c709a767e4d5f74acd1997c7334664792e54fffa4667322fe0c8e3bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:01 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:01.361468687Z" level=info msg="ignoring event" container=8bc392bb375d02a89afb76b15fd6a5e6f338b449e4a75c4ecbea357f92e478cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:12 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:12.381489817Z" level=info msg="ignoring event" container=51e6f5c664348ebf4bb1e12517cbc2876ba623a7f5ebcb819db59b69e064a448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.143146114Z" level=info msg="ignoring event" container=d7c1110fc7744f8cef459ac9b29e7af212968c0eca608f514cb6242ddd07a24b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.227014502Z" level=info msg="ignoring event" container=9cba2c1eabd39d80695c125e39103faa089fd1f520e51db4c001d326d776fd36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.328048224Z" level=info msg="ignoring event" container=e562f6cf886c63356d0af139642755fe27be0fa42750b81a00842a376dd2ca65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.432736039Z" level=info msg="ignoring event" container=405026d69170c13883f94d5c3d01fed25cd23d2f548ec10f46e0bab497f659b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.708098018Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.970582442Z" level=info msg="ignoring event" container=191a74b1a92663f8ad4852f0cd67b90c025b6dc58677a4b04d38f511b9252c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.980267195Z" level=info msg="ignoring event" container=d96c662f207ff4edd3b075a28594efe9f3cc4bccefac9b980255110001a6e190 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.024092135Z" level=info msg="ignoring event" container=5df65826498ffa5bfe199a2abdaba2304dee22a66d7ae26d7ec6ac6471476580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.035079956Z" level=info msg="ignoring event" container=93f38ef14ca1a6a715f56d870417edec44229ceff62bef18f1537ec66db49ce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056417707Z" level=info msg="ignoring event" container=167edd45891d680e8fea99d07a8ea8dcb471ca89ae77b71829a04114c186e923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056471728Z" level=info msg="ignoring event" container=6df58aa48c0aa0fcc200394dd6c1d56e27f817b0b692154a1e39d516e9132d6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056500388Z" level=info msg="ignoring event" container=9d7a14e7d2ed0e05c5e4230c4b7d5f33937b37b530527824914ebc7488e56c86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.128501684Z" level=info msg="ignoring event" container=fbb0b29c0b760be2641278ccecf58ef7d405b4bae98fbdeaaddac73e9993c8f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189072560Z" level=info msg="ignoring event" container=e14a0717ba7e5abb2556615ca03bce0368500b10787976e13fcf66f07954d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189125285Z" level=info msg="ignoring event" container=931d0cf38f4164152362ac73611947dfb7b818d1787c6917761ee38cf6455430 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189154026Z" level=info msg="ignoring event" container=7de233655ab48863c8775d476f4105f295a01822e0f7470e3fa5998fee116149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189171905Z" level=info msg="ignoring event" container=a78e3de62a6109945ba7fdbd4320addd16c2d168430910f6400d45e454f8c252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189194477Z" level=info msg="ignoring event" container=6dae9807d021f5b611b16ef76724238d00a5f13b1d0ec7aef55d007d153f5d92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.173418238Z" level=info msg="ignoring event" container=dabd770834038015a989eb6cc43f3b3791df209e3a416da0ec1bf79149bbfc0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.176274994Z" level=info msg="ignoring event" container=d248a4aa9cdaa7ae7b9378771b93e46c2195189f67e90c78e3dd00da17d11d12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.833199755Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.873403418Z" level=info msg="ignoring event" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.916239472Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.917246457Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10551]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10592]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10637]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 07:59:24.620777 1167706 out.go:239] * 
	* 
	W0806 07:59:24.621745 1167706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 07:59:24.624172 1167706 out.go:177] 

                                                
                                                
** /stderr **
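The repeated dockerd startup failure above (`invalid TLS configuration: ... tls: private key does not match public key`) means `/etc/docker/server.pem` and `/etc/docker/server-key.pem` no longer form a matching pair, which in this run points at minikube regenerating one side of the pair across the upgrade. A minimal diagnostic sketch (paths assumed from the journal output; not part of the test itself) compares the public key embedded in the certificate against the one derived from the private key:

```shell
#!/bin/sh
# Diagnostic sketch for dockerd's "private key does not match public key":
# a cert and key pair up iff they yield the same SubjectPublicKeyInfo.
tls_pair_matches() {
  cert=$1
  key=$2
  # Public key as stated inside the certificate.
  cert_pub=$(openssl x509 -in "$cert" -noout -pubkey) || return 2
  # Public key derived from the private key.
  key_pub=$(openssl pkey -in "$key" -pubout) || return 2
  # Byte-for-byte PEM comparison.
  [ "$cert_pub" = "$key_pub" ]
}

# Paths taken from the journal output above; adjust for other setups.
CERT=${CERT:-/etc/docker/server.pem}
KEY=${KEY:-/etc/docker/server-key.pem}
if [ -f "$CERT" ] && [ -f "$KEY" ]; then
  if tls_pair_matches "$CERT" "$KEY"; then
    echo "cert/key match"
  else
    echo "cert/key MISMATCH - regenerate the pair before restarting docker"
  fi
fi
```

Run inside the node (e.g. via `minikube ssh -p kubernetes-upgrade-473733`); a mismatch here reproduces the exact condition that keeps `systemctl restart docker` looping through restart counters 1 and 2 in the journal.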
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-arm64 start -p kubernetes-upgrade-473733 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-06 07:59:24.659229704 +0000 UTC m=+3242.742950410
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-473733
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-473733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd",
	        "Created": "2024-08-06T07:53:41.128423125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1146433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-06T07:54:28.244318553Z",
	            "FinishedAt": "2024-08-06T07:54:27.394700039Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd/hosts",
	        "LogPath": "/var/lib/docker/containers/adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd/adb6396f63d00e79544f7be7801639ca687155bfa6452a58acac9d7ac38393fd-json.log",
	        "Name": "/kubernetes-upgrade-473733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-473733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-473733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1752db5cd6bc0d10998ff6e5a468251baadece13458a1e90c80402d8faca1140-init/diff:/var/lib/docker/overlay2/f17d1d656e77305d60be732f121cb31b7a91566dc22e52a88b85037649f8d795/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1752db5cd6bc0d10998ff6e5a468251baadece13458a1e90c80402d8faca1140/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1752db5cd6bc0d10998ff6e5a468251baadece13458a1e90c80402d8faca1140/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1752db5cd6bc0d10998ff6e5a468251baadece13458a1e90c80402d8faca1140/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-473733",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-473733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-473733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-473733",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-473733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e24487db04dd540788aed8ed865bda95e1128c40f21ddac75565812906bae4b",
	            "SandboxKey": "/var/run/docker/netns/4e24487db04d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-473733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dc97d7ce4c3b039aeb5a2c355843cfa1cc0a9bd37ec93e0bba3ecc60b7ce4b8b",
	                    "EndpointID": "f6de456e2ebc18451126cb3c59896534e0c47e574419be25448e5cd352f60180",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-473733",
	                        "adb6396f63d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-473733 -n kubernetes-upgrade-473733
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-473733 -n kubernetes-upgrade-473733: exit status 2 (328.759018ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-473733 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-101614          | force-systemd-flag-101614 | jenkins | v1.33.1 | 06 Aug 24 07:51 UTC | 06 Aug 24 07:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-609694              | force-systemd-env-609694  | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-609694           | force-systemd-env-609694  | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	| ssh     | force-systemd-flag-101614             | force-systemd-flag-101614 | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-101614          | force-systemd-flag-101614 | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	| start   | -p docker-flags-968082                | docker-flags-968082       | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-834751             | cert-expiration-834751    | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | docker-flags-968082 ssh               | docker-flags-968082       | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-968082 ssh               | docker-flags-968082       | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-968082                | docker-flags-968082       | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:52 UTC |
	| start   | -p cert-options-256388                | cert-options-256388       | jenkins | v1.33.1 | 06 Aug 24 07:52 UTC | 06 Aug 24 07:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | cert-options-256388 ssh               | cert-options-256388       | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-256388 -- sudo        | cert-options-256388       | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-256388                | cert-options-256388       | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:53 UTC |
	| start   | -p kubernetes-upgrade-473733          | kubernetes-upgrade-473733 | jenkins | v1.33.1 | 06 Aug 24 07:53 UTC | 06 Aug 24 07:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-473733          | kubernetes-upgrade-473733 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC | 06 Aug 24 07:54 UTC |
	| start   | -p kubernetes-upgrade-473733          | kubernetes-upgrade-473733 | jenkins | v1.33.1 | 06 Aug 24 07:54 UTC | 06 Aug 24 07:59 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-834751             | cert-expiration-834751    | jenkins | v1.33.1 | 06 Aug 24 07:55 UTC | 06 Aug 24 07:56 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-834751             | cert-expiration-834751    | jenkins | v1.33.1 | 06 Aug 24 07:56 UTC | 06 Aug 24 07:56 UTC |
	| start   | -p missing-upgrade-779445             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 07:56 UTC | 06 Aug 24 07:57 UTC |
	|         | --memory=2200 --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-779445             | missing-upgrade-779445    | jenkins | v1.33.1 | 06 Aug 24 07:57 UTC | 06 Aug 24 07:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-779445             | missing-upgrade-779445    | jenkins | v1.33.1 | 06 Aug 24 07:58 UTC | 06 Aug 24 07:58 UTC |
	| start   | -p running-upgrade-292442             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 07:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-473733          | kubernetes-upgrade-473733 | jenkins | v1.33.1 | 06 Aug 24 07:59 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-473733          | kubernetes-upgrade-473733 | jenkins | v1.33.1 | 06 Aug 24 07:59 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:59:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:59:08.315500 1167706 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:59:08.315623 1167706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:59:08.315635 1167706 out.go:304] Setting ErrFile to fd 2...
	I0806 07:59:08.315640 1167706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:59:08.315895 1167706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:59:08.316247 1167706 out.go:298] Setting JSON to false
	I0806 07:59:08.317549 1167706 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20492,"bootTime":1722910656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:59:08.317660 1167706 start.go:139] virtualization:  
	I0806 07:59:08.320567 1167706 out.go:177] * [kubernetes-upgrade-473733] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0806 07:59:08.322611 1167706 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:59:08.322774 1167706 notify.go:220] Checking for updates...
	I0806 07:59:08.327214 1167706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:59:08.329196 1167706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:59:08.330984 1167706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:59:08.332564 1167706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0806 07:59:08.334418 1167706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:59:08.337573 1167706 config.go:182] Loaded profile config "kubernetes-upgrade-473733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 07:59:08.338101 1167706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:59:08.366239 1167706 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:59:08.366374 1167706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:59:08.428424 1167706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-06 07:59:08.418822948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:59:08.428538 1167706 docker.go:307] overlay module found
	I0806 07:59:08.430308 1167706 out.go:177] * Using the docker driver based on existing profile
	I0806 07:59:08.432209 1167706 start.go:297] selected driver: docker
	I0806 07:59:08.432221 1167706 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-473733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-473733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:59:08.432338 1167706 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:59:08.433018 1167706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:59:08.489654 1167706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-06 07:59:08.479520689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:59:08.490037 1167706 cni.go:84] Creating CNI manager for ""
	I0806 07:59:08.490062 1167706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 07:59:08.490120 1167706 start.go:340] cluster config:
	{Name:kubernetes-upgrade-473733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-473733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:59:08.493173 1167706 out.go:177] * Starting "kubernetes-upgrade-473733" primary control-plane node in "kubernetes-upgrade-473733" cluster
	I0806 07:59:08.495208 1167706 cache.go:121] Beginning downloading kic base image for docker with docker
	I0806 07:59:08.496915 1167706 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0806 07:59:08.498554 1167706 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 07:59:08.498612 1167706 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 07:59:08.498617 1167706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0806 07:59:08.498621 1167706 cache.go:56] Caching tarball of preloaded images
	I0806 07:59:08.498844 1167706 preload.go:172] Found /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 07:59:08.498856 1167706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 07:59:08.498986 1167706 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubernetes-upgrade-473733/config.json ...
	W0806 07:59:08.529357 1167706 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0806 07:59:08.529379 1167706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0806 07:59:08.529477 1167706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0806 07:59:08.529500 1167706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0806 07:59:08.529512 1167706 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0806 07:59:08.529521 1167706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0806 07:59:08.529527 1167706 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0806 07:59:08.666649 1167706 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0806 07:59:08.666692 1167706 cache.go:194] Successfully downloaded all kic artifacts
	I0806 07:59:08.666722 1167706 start.go:360] acquireMachinesLock for kubernetes-upgrade-473733: {Name:mke43a03255dc09d0ea9df96a9b8641c9cefe583 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 07:59:08.666790 1167706 start.go:364] duration metric: took 42.608µs to acquireMachinesLock for "kubernetes-upgrade-473733"
	I0806 07:59:08.666817 1167706 start.go:96] Skipping create...Using existing machine configuration
	I0806 07:59:08.666846 1167706 fix.go:54] fixHost starting: 
	I0806 07:59:08.667139 1167706 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-473733 --format={{.State.Status}}
	I0806 07:59:08.683714 1167706 fix.go:112] recreateIfNeeded on kubernetes-upgrade-473733: state=Running err=<nil>
	W0806 07:59:08.683743 1167706 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 07:59:08.686976 1167706 out.go:177] * Updating the running docker "kubernetes-upgrade-473733" container ...
	I0806 07:59:05.463263 1166558 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v running-upgrade-292442:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -I lz4 -xf /preloaded.tar -C /extractDir: (4.889133452s)
	I0806 07:59:05.463283 1166558 kic.go:188] duration metric: took 4.889259 seconds to extract preloaded images to volume
	W0806 07:59:05.463421 1166558 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0806 07:59:05.463588 1166558 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0806 07:59:05.551131 1166558 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-292442 --name running-upgrade-292442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-292442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-292442 --network running-upgrade-292442 --ip 192.168.85.2 --volume running-upgrade-292442:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95
	I0806 07:59:05.973862 1166558 cli_runner.go:164] Run: docker container inspect running-upgrade-292442 --format={{.State.Running}}
	I0806 07:59:06.015847 1166558 cli_runner.go:164] Run: docker container inspect running-upgrade-292442 --format={{.State.Status}}
	I0806 07:59:06.041257 1166558 cli_runner.go:164] Run: docker exec running-upgrade-292442 stat /var/lib/dpkg/alternatives/iptables
	I0806 07:59:06.126559 1166558 oci.go:144] the created container "running-upgrade-292442" has a running status.
	I0806 07:59:06.126576 1166558 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa...
	I0806 07:59:06.487418 1166558 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0806 07:59:06.522620 1166558 cli_runner.go:164] Run: docker container inspect running-upgrade-292442 --format={{.State.Status}}
	I0806 07:59:06.557270 1166558 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0806 07:59:06.557282 1166558 kic_runner.go:114] Args: [docker exec --privileged running-upgrade-292442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0806 07:59:06.670893 1166558 cli_runner.go:164] Run: docker container inspect running-upgrade-292442 --format={{.State.Status}}
	I0806 07:59:06.698654 1166558 machine.go:88] provisioning docker machine ...
	I0806 07:59:06.698675 1166558 ubuntu.go:169] provisioning hostname "running-upgrade-292442"
	I0806 07:59:06.698733 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:06.739121 1166558 main.go:134] libmachine: Using SSH client type: native
	I0806 07:59:06.739322 1166558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x390b20] 0x3936b0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I0806 07:59:06.739333 1166558 main.go:134] libmachine: About to run SSH command:
	sudo hostname running-upgrade-292442 && echo "running-upgrade-292442" | sudo tee /etc/hostname
	I0806 07:59:06.739892 1166558 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34166->127.0.0.1:33843: read: connection reset by peer
	I0806 07:59:08.690109 1167706 machine.go:94] provisionDockerMachine start ...
	I0806 07:59:08.690203 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:08.706573 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:08.706858 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:08.706875 1167706 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 07:59:08.847356 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-473733
	
	I0806 07:59:08.847389 1167706 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-473733"
	I0806 07:59:08.847585 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:08.866298 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:08.866551 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:08.866626 1167706 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-473733 && echo "kubernetes-upgrade-473733" | sudo tee /etc/hostname
	I0806 07:59:09.020476 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-473733
	
	I0806 07:59:09.020568 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:09.041409 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:09.041685 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:09.041708 1167706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-473733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-473733/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-473733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:59:09.179429 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
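The SSH command echoed just above is an idempotent /etc/hosts edit: it rewrites an existing 127.0.1.1 entry if present, otherwise appends one, and does nothing when the hostname is already there. A minimal sketch of the same branching against a scratch file rather than the real /etc/hosts (the temp path and sample contents are illustrative, not from this run):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=kubernetes-upgrade-473733
# Only touch the file when the hostname is absent; replace an existing
# 127.0.1.1 line if there is one, otherwise append a fresh entry
# (the same logic minikube runs over SSH, minus sudo).
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
updated=$(grep '^127\.0\.1\.1' "$hosts")
echo "$updated"
```

Running it a second time leaves the file unchanged, which is why the remote command is safe on a machine that was already provisioned.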
	I0806 07:59:09.179551 1167706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19370-879111/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-879111/.minikube}
	I0806 07:59:09.179574 1167706 ubuntu.go:177] setting up certificates
	I0806 07:59:09.179597 1167706 provision.go:84] configureAuth start
	I0806 07:59:09.179656 1167706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-473733
	I0806 07:59:09.197653 1167706 provision.go:143] copyHostCerts
	I0806 07:59:09.197735 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem, removing ...
	I0806 07:59:09.197750 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem
	I0806 07:59:09.197834 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem (1082 bytes)
	I0806 07:59:09.197946 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem, removing ...
	I0806 07:59:09.197958 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem
	I0806 07:59:09.197987 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem (1123 bytes)
	I0806 07:59:09.198071 1167706 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem, removing ...
	I0806 07:59:09.198082 1167706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem
	I0806 07:59:09.198110 1167706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem (1679 bytes)
	I0806 07:59:09.198177 1167706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-473733 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-473733 localhost minikube]
	I0806 07:59:10.656343 1167706 provision.go:177] copyRemoteCerts
	I0806 07:59:10.656450 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:59:10.656514 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:10.680273 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:10.778029 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 07:59:10.849122 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 07:59:10.984149 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 07:59:11.080952 1167706 provision.go:87] duration metric: took 1.901339623s to configureAuth
	I0806 07:59:11.080986 1167706 ubuntu.go:193] setting minikube options for container-runtime
	I0806 07:59:11.081221 1167706 config.go:182] Loaded profile config "kubernetes-upgrade-473733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 07:59:11.081297 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.118142 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.118430 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.118449 1167706 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 07:59:11.449582 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0806 07:59:11.449600 1167706 ubuntu.go:71] root file system type: overlay
	I0806 07:59:11.449715 1167706 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 07:59:11.449778 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.487491 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.487828 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.487916 1167706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 07:59:11.785735 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 07:59:11.785825 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:11.813099 1167706 main.go:141] libmachine: Using SSH client type: native
	I0806 07:59:11.813348 1167706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I0806 07:59:11.813372 1167706 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 07:59:12.059353 1167706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
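Two details of the unit update above are worth noting: the empty `ExecStart=` line clears the command inherited from the base dockerd unit (the comments in the generated file explain why systemd would otherwise reject a second ExecStart), and the final SSH command only swaps the file in and restarts docker when `diff` reports a change. A small sketch of that install-if-changed step against a temp directory (paths and the trimmed-down unit body are illustrative; no daemon-reload or restart is attempted here):

```shell
unitdir=$(mktemp -d)
cat > "$unitdir/docker.service.new" <<'EOF'
[Service]
Type=notify
# An empty ExecStart= clears the command inherited from the base unit;
# systemd refuses a second ExecStart for non-oneshot services otherwise.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Install only when the current unit differs (or does not exist yet),
# mirroring the "diff -u old new || { mv new old; ...restart...; }" pattern
# from the log:
diff -u "$unitdir/docker.service" "$unitdir/docker.service.new" 2>/dev/null \
  || mv "$unitdir/docker.service.new" "$unitdir/docker.service"
execstarts=$(grep -c '^ExecStart=' "$unitdir/docker.service")
echo "$execstarts"
```

The `diff || mv` guard is what makes repeated `minikube start` runs against the same container cheap: an unchanged unit means no docker restart.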
	I0806 07:59:12.059385 1167706 machine.go:97] duration metric: took 3.369256653s to provisionDockerMachine
	I0806 07:59:12.059398 1167706 start.go:293] postStartSetup for "kubernetes-upgrade-473733" (driver="docker")
	I0806 07:59:12.059412 1167706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:59:12.059521 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:59:12.059573 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.093044 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.242143 1167706 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:59:12.259383 1167706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0806 07:59:12.259560 1167706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0806 07:59:12.259594 1167706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0806 07:59:12.259617 1167706 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0806 07:59:12.259644 1167706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/addons for local assets ...
	I0806 07:59:12.259725 1167706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/files for local assets ...
	I0806 07:59:12.259862 1167706 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem -> 8844952.pem in /etc/ssl/certs
	I0806 07:59:12.260023 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:59:12.274383 1167706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem --> /etc/ssl/certs/8844952.pem (1708 bytes)
	I0806 07:59:12.379668 1167706 start.go:296] duration metric: took 320.254182ms for postStartSetup
	I0806 07:59:12.379752 1167706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:59:12.379795 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.413127 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.556274 1167706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0806 07:59:12.576343 1167706 fix.go:56] duration metric: took 3.909511968s for fixHost
	I0806 07:59:12.576366 1167706 start.go:83] releasing machines lock for "kubernetes-upgrade-473733", held for 3.909562657s
	I0806 07:59:12.576442 1167706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-473733
	I0806 07:59:12.603268 1167706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 07:59:12.603348 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.603565 1167706 ssh_runner.go:195] Run: cat /version.json
	I0806 07:59:12.603608 1167706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-473733
	I0806 07:59:12.650249 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:12.651834 1167706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/kubernetes-upgrade-473733/id_rsa Username:docker}
	I0806 07:59:13.115144 1167706 ssh_runner.go:195] Run: systemctl --version
	I0806 07:59:13.127694 1167706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 07:59:13.148984 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0806 07:59:13.182038 1167706 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0806 07:59:13.182127 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0806 07:59:13.205127 1167706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0806 07:59:13.231787 1167706 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 07:59:13.231828 1167706 start.go:495] detecting cgroup driver to use...
	I0806 07:59:13.231868 1167706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0806 07:59:13.231969 1167706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:59:13.255844 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0806 07:59:13.269902 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 07:59:13.291391 1167706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 07:59:13.291503 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 07:59:13.315824 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
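	The four `sed` edits above rewrite `/etc/containerd/config.toml` in place: pin the sandbox image to `pause:3.10`, disable `restrict_oom_score_adj`, force `SystemdCgroup = false` to match the detected cgroupfs driver, and swap the legacy `io.containerd.runtime.v1.linux` runtime for `io.containerd.runc.v2`. The same edits can be exercised safely against a scratch copy (the snippet of config.toml below is illustrative, not the real file):

```shell
#!/bin/sh
# Apply minikube's containerd edits to a scratch config.toml instead of
# the real /etc/containerd/config.toml (hypothetical sample content).
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.9"
    restrict_oom_score_adj = true
    SystemdCgroup = true
    runtime_type = "io.containerd.runtime.v1.linux"
EOF
# Same sed expressions the log shows, pointed at the scratch file:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
cat "$CFG"
```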
	I0806 07:59:09.909192 1166558 main.go:134] libmachine: SSH cmd err, output: <nil>: running-upgrade-292442
	
	I0806 07:59:09.909269 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:09.930051 1166558 main.go:134] libmachine: Using SSH client type: native
	I0806 07:59:09.930209 1166558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x390b20] 0x3936b0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I0806 07:59:09.930226 1166558 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-292442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-292442/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-292442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 07:59:10.072036 1166558 main.go:134] libmachine: SSH cmd err, output: <nil>: 
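	The hostname patch above is idempotent: it only touches `/etc/hosts` when no line already ends in the machine name, and it either rewrites the existing `127.0.1.1` entry or appends a fresh one. A standalone sketch of the same logic, run against a scratch copy rather than the real `/etc/hosts` (file contents here are illustrative):

```shell
#!/bin/sh
# Idempotent hostname patch, as in the logged SSH command, but applied
# to a temp file instead of /etc/hosts (and without sudo/tee).
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=running-upgrade-292442
if ! grep -q "\s$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # replace the existing 127.0.1.1 line in place
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```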
	I0806 07:59:10.072054 1166558 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19370-879111/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-879111/.minikube}
	I0806 07:59:10.072084 1166558 ubuntu.go:177] setting up certificates
	I0806 07:59:10.072097 1166558 provision.go:83] configureAuth start
	I0806 07:59:10.072167 1166558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-292442
	I0806 07:59:10.130403 1166558 provision.go:138] copyHostCerts
	I0806 07:59:10.130462 1166558 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem, removing ...
	I0806 07:59:10.130473 1166558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem
	I0806 07:59:10.130537 1166558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/ca.pem (1082 bytes)
	I0806 07:59:10.130628 1166558 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem, removing ...
	I0806 07:59:10.130632 1166558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem
	I0806 07:59:10.130652 1166558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/cert.pem (1123 bytes)
	I0806 07:59:10.130694 1166558 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem, removing ...
	I0806 07:59:10.130697 1166558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem
	I0806 07:59:10.130724 1166558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-879111/.minikube/key.pem (1679 bytes)
	I0806 07:59:10.130765 1166558 provision.go:112] generating server cert: /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-292442 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-292442]
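	The `provision.go:112` step above issues a server certificate signed by the minikube CA, with a SAN list covering the node IP, loopback, and the profile name. minikube does this in Go; the equivalent can be sketched with `openssl` against throwaway keys (all paths and the temporary CA below are illustrative, not minikube's actual files):

```shell
#!/bin/bash
# Sketch of server-cert provisioning with openssl: make a scratch CA,
# then issue a server cert carrying a SAN list like the one in the log.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$DIR/ca-key.pem" -out "$DIR/ca.pem" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=running-upgrade-292442" \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.csr" 2>/dev/null
printf 'subjectAltName=IP:192.168.85.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-292442\n' > "$DIR/san.cnf"
openssl x509 -req -in "$DIR/server.csr" -CA "$DIR/ca.pem" -CAkey "$DIR/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$DIR/san.cnf" -out "$DIR/server.pem" 2>/dev/null
# Show the SANs baked into the issued certificate
openssl x509 -in "$DIR/server.pem" -noout -ext subjectAltName
```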
	I0806 07:59:10.961390 1166558 provision.go:172] copyRemoteCerts
	I0806 07:59:10.961472 1166558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 07:59:10.961562 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:10.988364 1166558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa Username:docker}
	I0806 07:59:11.102277 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 07:59:11.146892 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 07:59:11.186961 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 07:59:11.216745 1166558 provision.go:86] duration metric: configureAuth took 1.144633386s
	I0806 07:59:11.216770 1166558 ubuntu.go:193] setting minikube options for container-runtime
	I0806 07:59:11.216994 1166558 config.go:178] Loaded profile config "running-upgrade-292442": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 07:59:11.217064 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:11.259254 1166558 main.go:134] libmachine: Using SSH client type: native
	I0806 07:59:11.259542 1166558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x390b20] 0x3936b0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I0806 07:59:11.259553 1166558 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 07:59:11.429681 1166558 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0806 07:59:11.429693 1166558 ubuntu.go:71] root file system type: overlay
	I0806 07:59:11.429868 1166558 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 07:59:11.429942 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:11.463291 1166558 main.go:134] libmachine: Using SSH client type: native
	I0806 07:59:11.463502 1166558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x390b20] 0x3936b0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I0806 07:59:11.463586 1166558 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 07:59:11.630499 1166558 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 07:59:11.630631 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:11.669685 1166558 main.go:134] libmachine: Using SSH client type: native
	I0806 07:59:11.669896 1166558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x390b20] 0x3936b0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I0806 07:59:11.669919 1166558 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 07:59:12.898328 1166558 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:00:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-06 07:59:11.623055781 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0806 07:59:12.898346 1166558 machine.go:91] provisioned docker machine in 6.199680129s
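	The unit update a few lines up uses a diff-gated replace: `diff -u old new` exits non-zero only when the files differ, so the `|| { mv …; systemctl restart …; }` branch fires exactly when there is something to apply, and an unchanged unit never triggers a Docker restart. The pattern in isolation, against scratch files (contents illustrative):

```shell
#!/bin/sh
# Diff-gated update: act only when the new file actually differs.
OLD=$(mktemp); NEW=$(mktemp)
echo "Restart=always"     > "$OLD"
echo "Restart=on-failure" > "$NEW"
# diff exits 1 on any difference, so the branch runs only then
diff -u "$OLD" "$NEW" >/dev/null || { mv "$NEW" "$OLD"; echo "unit updated"; }
cat "$OLD"
```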
	I0806 07:59:12.898356 1166558 client.go:171] LocalClient.Create took 13.637982704s
	I0806 07:59:12.898379 1166558 start.go:173] duration metric: libmachine.API.Create for "running-upgrade-292442" took 13.638032451s
	I0806 07:59:12.898386 1166558 start.go:306] post-start starting for "running-upgrade-292442" (driver="docker")
	I0806 07:59:12.898390 1166558 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 07:59:12.898449 1166558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 07:59:12.898499 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:12.922155 1166558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa Username:docker}
	I0806 07:59:13.027979 1166558 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 07:59:13.031589 1166558 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0806 07:59:13.031606 1166558 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0806 07:59:13.031617 1166558 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0806 07:59:13.031623 1166558 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0806 07:59:13.031632 1166558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/addons for local assets ...
	I0806 07:59:13.031688 1166558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-879111/.minikube/files for local assets ...
	I0806 07:59:13.031764 1166558 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem -> 8844952.pem in /etc/ssl/certs
	I0806 07:59:13.031867 1166558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 07:59:13.041672 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem --> /etc/ssl/certs/8844952.pem (1708 bytes)
	I0806 07:59:13.077317 1166558 start.go:309] post-start completed in 178.916702ms
	I0806 07:59:13.077688 1166558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-292442
	I0806 07:59:13.098062 1166558 profile.go:148] Saving config to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/config.json ...
	I0806 07:59:13.098319 1166558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:59:13.098356 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:13.127626 1166558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa Username:docker}
	I0806 07:59:13.220807 1166558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0806 07:59:13.225482 1166558 start.go:134] duration metric: createHost completed in 13.967574846s
	I0806 07:59:13.225505 1166558 start.go:81] releasing machines lock for "running-upgrade-292442", held for 13.967712115s
	I0806 07:59:13.225606 1166558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-292442
	I0806 07:59:13.246180 1166558 ssh_runner.go:195] Run: systemctl --version
	I0806 07:59:13.246223 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:13.246241 1166558 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0806 07:59:13.246300 1166558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-292442
	I0806 07:59:13.280309 1166558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa Username:docker}
	I0806 07:59:13.285479 1166558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/running-upgrade-292442/id_rsa Username:docker}
	I0806 07:59:13.384751 1166558 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 07:59:13.491182 1166558 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0806 07:59:13.491239 1166558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 07:59:13.504947 1166558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:59:13.537141 1166558 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 07:59:13.659304 1166558 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 07:59:13.802851 1166558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:13.962782 1166558 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 07:59:14.359255 1166558 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 07:59:14.456903 1166558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:14.556734 1166558 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0806 07:59:14.569752 1166558 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 07:59:14.569820 1166558 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 07:59:14.573433 1166558 start.go:468] Will wait 60s for crictl version
	I0806 07:59:14.573491 1166558 ssh_runner.go:195] Run: sudo crictl version
	I0806 07:59:14.693669 1166558 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0806 07:59:14.693735 1166558 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 07:59:14.733867 1166558 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 07:59:14.780940 1166558 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
	I0806 07:59:14.781030 1166558 cli_runner.go:164] Run: docker network inspect running-upgrade-292442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0806 07:59:14.795019 1166558 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0806 07:59:14.798645 1166558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
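	The `{ grep -v … ; echo …; } > /tmp/h.$$; sudo cp …` pipeline above is how minikube upserts a pinned hosts entry: strip any stale line for the name, append the fresh mapping, and copy the rebuilt file back in one step. The same upsert against a scratch file (stale IP below is made up; `mv` stands in for the logged `sudo cp`):

```shell
#!/bin/bash
# Upsert a tab-separated hosts entry: drop any stale line for the name,
# then append the current mapping. Runs on a temp file, not /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.85.9\thost.minikube.internal\n' > "$HOSTS"
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '192.168.85.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
grep 'host.minikube.internal' "$HOSTS"
```

	Note the trailing `$` in the pattern plus the leading tab: together they match only whole entries for that exact name, so unrelated lines survive the `grep -v`.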
	I0806 07:59:14.809025 1166558 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 07:59:14.809076 1166558 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 07:59:14.846153 1166558 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 07:59:14.846167 1166558 docker.go:533] Images already preloaded, skipping extraction
	I0806 07:59:14.846231 1166558 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 07:59:14.882330 1166558 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 07:59:14.882347 1166558 cache_images.go:84] Images are preloaded, skipping loading
	I0806 07:59:14.882414 1166558 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 07:59:14.982856 1166558 cni.go:95] Creating CNI manager for ""
	I0806 07:59:14.982868 1166558 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0806 07:59:14.982876 1166558 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0806 07:59:14.982890 1166558 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-292442 NodeName:running-upgrade-292442 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0806 07:59:14.983025 1166558 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "running-upgrade-292442"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
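	The rendered kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, each declaring exactly one `kind:`. A cheap structural sanity check on such a stream (sample below is a skeleton, not the full config):

```shell
#!/bin/sh
# Count the documents in a kubeadm-style multi-doc YAML stream by
# counting top-level "kind:" declarations (skeleton file, temp path).
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$CFG"   # prints 4
```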
	
	I0806 07:59:14.983098 1166558 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=running-upgrade-292442 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-292442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0806 07:59:14.983156 1166558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0806 07:59:14.990966 1166558 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 07:59:14.991031 1166558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 07:59:14.998963 1166558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
	I0806 07:59:15.018327 1166558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 07:59:15.039184 1166558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
	I0806 07:59:15.056796 1166558 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0806 07:59:15.060671 1166558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 07:59:15.072492 1166558 certs.go:54] Setting up /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442 for IP: 192.168.85.2
	I0806 07:59:15.072619 1166558 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/19370-879111/.minikube/ca.key
	I0806 07:59:15.072661 1166558 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/19370-879111/.minikube/proxy-client-ca.key
	I0806 07:59:15.072711 1166558 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.key
	I0806 07:59:15.072722 1166558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.crt with IP's: []
	I0806 07:59:15.393649 1166558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.crt ...
	I0806 07:59:15.393663 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.crt: {Name:mk83dbac0603f88943151b8371ca146c2d1bb706 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:15.393916 1166558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.key ...
	I0806 07:59:15.393923 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/client.key: {Name:mka31ced2acabb636e0fcf73197cb1deb08f8751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:15.394051 1166558 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key.43b9df8c
	I0806 07:59:15.394062 1166558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0806 07:59:15.726681 1166558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt.43b9df8c ...
	I0806 07:59:15.726697 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt.43b9df8c: {Name:mk312a36ee29af178a820fcd94ff0351ea5c33e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:15.726943 1166558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key.43b9df8c ...
	I0806 07:59:15.726950 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key.43b9df8c: {Name:mk792e8f5ee2ed1ab5f84a494b69813c244b2cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:15.727061 1166558 certs.go:320] copying /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt
	I0806 07:59:15.727122 1166558 certs.go:324] copying /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key
	I0806 07:59:15.727173 1166558 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.key
	I0806 07:59:15.727184 1166558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.crt with IP's: []
	I0806 07:59:16.338502 1166558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.crt ...
	I0806 07:59:16.338517 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.crt: {Name:mk2b7f135ae44de839042b7c54b8425222660ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:16.338746 1166558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.key ...
	I0806 07:59:16.338754 1166558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.key: {Name:mk58f7a657f75adfce01f448de1d8a832a68c4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:59:16.338960 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/884495.pem (1338 bytes)
	W0806 07:59:16.338997 1166558 certs.go:384] ignoring /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/884495_empty.pem, impossibly tiny 0 bytes
	I0806 07:59:16.339005 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 07:59:16.339031 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/ca.pem (1082 bytes)
	I0806 07:59:16.339053 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/cert.pem (1123 bytes)
	I0806 07:59:16.339078 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/certs/home/jenkins/minikube-integration/19370-879111/.minikube/certs/key.pem (1679 bytes)
	I0806 07:59:16.339117 1166558 certs.go:388] found cert: /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem (1708 bytes)
	I0806 07:59:16.339729 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0806 07:59:16.360623 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 07:59:16.381085 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 07:59:16.404642 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/running-upgrade-292442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 07:59:16.425506 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 07:59:16.446317 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 07:59:16.468681 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 07:59:16.489541 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 07:59:16.513482 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 07:59:16.534730 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/certs/884495.pem --> /usr/share/ca-certificates/884495.pem (1338 bytes)
	I0806 07:59:16.555174 1166558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/ssl/certs/8844952.pem --> /usr/share/ca-certificates/8844952.pem (1708 bytes)
	I0806 07:59:16.575596 1166558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 07:59:16.590556 1166558 ssh_runner.go:195] Run: openssl version
	I0806 07:59:16.596325 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8844952.pem && ln -fs /usr/share/ca-certificates/8844952.pem /etc/ssl/certs/8844952.pem"
	I0806 07:59:16.605700 1166558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8844952.pem
	I0806 07:59:16.608992 1166558 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:13 /usr/share/ca-certificates/8844952.pem
	I0806 07:59:16.609045 1166558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8844952.pem
	I0806 07:59:16.614784 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8844952.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 07:59:16.623018 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 07:59:16.631069 1166558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:59:16.634574 1166558 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:06 /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:59:16.634627 1166558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 07:59:16.640456 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 07:59:16.648802 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/884495.pem && ln -fs /usr/share/ca-certificates/884495.pem /etc/ssl/certs/884495.pem"
	I0806 07:59:16.656839 1166558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/884495.pem
	I0806 07:59:16.660120 1166558 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:13 /usr/share/ca-certificates/884495.pem
	I0806 07:59:16.660170 1166558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/884495.pem
	I0806 07:59:16.665612 1166558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/884495.pem /etc/ssl/certs/51391683.0"
	I0806 07:59:16.674074 1166558 kubeadm.go:395] StartCluster: {Name:running-upgrade-292442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-292442 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0806 07:59:16.674195 1166558 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 07:59:16.709302 1166558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 07:59:16.717111 1166558 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 07:59:16.724640 1166558 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0806 07:59:16.724696 1166558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 07:59:16.732632 1166558 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 07:59:16.732663 1166558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0806 07:59:13.342863 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 07:59:13.353796 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 07:59:13.364876 1167706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 07:59:13.374708 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 07:59:13.405203 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 07:59:13.464012 1167706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 07:59:13.479141 1167706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 07:59:13.493542 1167706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 07:59:13.521007 1167706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:13.688230 1167706 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 07:59:17.046478 1166558 out.go:204]   - Generating certificates and keys ...
	I0806 07:59:22.552766 1166558 out.go:204]   - Booting up control plane ...
	I0806 07:59:24.049169 1167706 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.360901477s)
	I0806 07:59:24.049194 1167706 start.go:495] detecting cgroup driver to use...
	I0806 07:59:24.049230 1167706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0806 07:59:24.049279 1167706 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 07:59:24.066684 1167706 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0806 07:59:24.066759 1167706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 07:59:24.083609 1167706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 07:59:24.108060 1167706 ssh_runner.go:195] Run: which cri-dockerd
	I0806 07:59:24.113833 1167706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 07:59:24.124056 1167706 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0806 07:59:24.143698 1167706 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 07:59:24.253351 1167706 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 07:59:24.398172 1167706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 07:59:24.398317 1167706 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 07:59:24.427431 1167706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 07:59:24.530481 1167706 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 07:59:24.594943 1167706 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 07:59:24.618953 1167706 out.go:177] 
	W0806 07:59:24.620678 1167706 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:54:28 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:28 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:28.982784895Z" level=info msg="Starting up"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.007226669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.020947661Z" level=info msg="Loading containers: start."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.195087421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.250781871Z" level=info msg="Loading containers: done."
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264220646Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.264314641Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:29 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295562911Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:29 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:29.295738177Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.498289159Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500277453Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[276]: time="2024-08-06T07:54:34.500743404Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:34 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.661126858Z" level=info msg="Starting up"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.690669608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.713507679Z" level=info msg="Loading containers: start."
	Aug 06 07:54:34 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:34.939391089Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.018362189Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032305116Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.032388314Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.037117672Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067478791Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.067596335Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070495344Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[809]: time="2024-08-06T07:54:35.070733238Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.118039265Z" level=info msg="Starting up"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.140415283Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.153552838Z" level=info msg="Loading containers: start."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.343031110Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.384135707Z" level=info msg="Loading containers: done."
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396493547Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.396572659Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426045044Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:35 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:35.426275324Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:54:35 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.793201483Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795765936Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1072]: time="2024-08-06T07:54:41.795964479Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:54:41 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.838051141Z" level=info msg="Starting up"
	Aug 06 07:54:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:41.862812366Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.549566992Z" level=info msg="Loading containers: start."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.716440912Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.758282476Z" level=info msg="Loading containers: done."
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770203715Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.770280103Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801262335Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:54:42 kubernetes-upgrade-473733 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:54:42 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:54:42.801395592Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:55:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:08.184139618Z" level=info msg="ignoring event" container=ccef9156af7fbe1b796b9040a14de52f493aa699297a7e3441b434daa3ecff6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.205176343Z" level=info msg="ignoring event" container=c89cbaed1e927c2f1fbe5e70ec42ece0068d04124c670a04d86ecdead9e9b2a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:29 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:29.940113710Z" level=info msg="ignoring event" container=4f17de248cc65e948965d131a79e0defc245b509639d5edeeb61c08aeafb6224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:41 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:55:41.976492443Z" level=info msg="ignoring event" container=8b0b371e338782aad76f095da8b5efd54b9a034c3b696f72480414bd61f4cd77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:09 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:09.140504054Z" level=info msg="ignoring event" container=33dfc14da1883dd53617ccba2bc2f659b27f78b897542363916d13e3777ed1df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:20 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:20.149573580Z" level=info msg="ignoring event" container=e99d1ebfc9b156453d9df9f4686fb6f09ffe0a06a3766831235ae239a586a1e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:56:57 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:56:57.932084317Z" level=info msg="ignoring event" container=ba46a40760deccd8d2b87f7fe8630e9ad8f789c11604173d87c14f48cd62e636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:57:08 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:57:08.956266666Z" level=info msg="ignoring event" container=07c9e0b5c709a767e4d5f74acd1997c7334664792e54fffa4667322fe0c8e3bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:01 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:01.361468687Z" level=info msg="ignoring event" container=8bc392bb375d02a89afb76b15fd6a5e6f338b449e4a75c4ecbea357f92e478cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:12 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:12.381489817Z" level=info msg="ignoring event" container=51e6f5c664348ebf4bb1e12517cbc2876ba623a7f5ebcb819db59b69e064a448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.143146114Z" level=info msg="ignoring event" container=d7c1110fc7744f8cef459ac9b29e7af212968c0eca608f514cb6242ddd07a24b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.227014502Z" level=info msg="ignoring event" container=9cba2c1eabd39d80695c125e39103faa089fd1f520e51db4c001d326d776fd36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.328048224Z" level=info msg="ignoring event" container=e562f6cf886c63356d0af139642755fe27be0fa42750b81a00842a376dd2ca65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:58:50 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:58:50.432736039Z" level=info msg="ignoring event" container=405026d69170c13883f94d5c3d01fed25cd23d2f548ec10f46e0bab497f659b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.708098018Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.970582442Z" level=info msg="ignoring event" container=191a74b1a92663f8ad4852f0cd67b90c025b6dc58677a4b04d38f511b9252c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:13 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:13.980267195Z" level=info msg="ignoring event" container=d96c662f207ff4edd3b075a28594efe9f3cc4bccefac9b980255110001a6e190 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.024092135Z" level=info msg="ignoring event" container=5df65826498ffa5bfe199a2abdaba2304dee22a66d7ae26d7ec6ac6471476580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.035079956Z" level=info msg="ignoring event" container=93f38ef14ca1a6a715f56d870417edec44229ceff62bef18f1537ec66db49ce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056417707Z" level=info msg="ignoring event" container=167edd45891d680e8fea99d07a8ea8dcb471ca89ae77b71829a04114c186e923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056471728Z" level=info msg="ignoring event" container=6df58aa48c0aa0fcc200394dd6c1d56e27f817b0b692154a1e39d516e9132d6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.056500388Z" level=info msg="ignoring event" container=9d7a14e7d2ed0e05c5e4230c4b7d5f33937b37b530527824914ebc7488e56c86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.128501684Z" level=info msg="ignoring event" container=fbb0b29c0b760be2641278ccecf58ef7d405b4bae98fbdeaaddac73e9993c8f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189072560Z" level=info msg="ignoring event" container=e14a0717ba7e5abb2556615ca03bce0368500b10787976e13fcf66f07954d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189125285Z" level=info msg="ignoring event" container=931d0cf38f4164152362ac73611947dfb7b818d1787c6917761ee38cf6455430 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189154026Z" level=info msg="ignoring event" container=7de233655ab48863c8775d476f4105f295a01822e0f7470e3fa5998fee116149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189171905Z" level=info msg="ignoring event" container=a78e3de62a6109945ba7fdbd4320addd16c2d168430910f6400d45e454f8c252 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:14 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:14.189194477Z" level=info msg="ignoring event" container=6dae9807d021f5b611b16ef76724238d00a5f13b1d0ec7aef55d007d153f5d92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.173418238Z" level=info msg="ignoring event" container=dabd770834038015a989eb6cc43f3b3791df209e3a416da0ec1bf79149bbfc0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.176274994Z" level=info msg="ignoring event" container=d248a4aa9cdaa7ae7b9378771b93e46c2195189f67e90c78e3dd00da17d11d12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.833199755Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.873403418Z" level=info msg="ignoring event" container=8f59b204c9af0bdc56285693e0548bbf041c6fa29b10b4f1a24095340b0f346b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.916239472Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:59:23 kubernetes-upgrade-473733 dockerd[1466]: time="2024-08-06T07:59:23.917246457Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:23 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10551]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10592]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10637]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
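	Every restart attempt above dies on the same TLS error: the private key in /etc/docker/server-key.pem does not correspond to the certificate in /etc/docker/server.pem. A quick way to confirm such a mismatch is to compare the public key derived from each file; the sketch below demonstrates the check on a throwaway self-signed pair (the paths and CN are illustrative, not the node's real files):

```shell
# Sketch: verify that a TLS certificate and private key belong together,
# the property dockerd is rejecting here. Uses a temp dir and a fresh
# self-signed pair purely for demonstration.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Generate a matching cert/key pair (illustrative names mirroring the log).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.pem \
  -days 1 -subj "/CN=demo" 2>/dev/null

# The public key extracted from the cert and the one derived from the
# private key must be byte-identical; if they differ, dockerd fails with
# "tls: private key does not match public key".
cert_pub=$(openssl x509 -in server.pem -noout -pubkey)
key_pub=$(openssl pkey -in server-key.pem -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "match"
else
  echo "MISMATCH"
fi
```

	On a broken node, running the same two `openssl` commands against the real /etc/docker/server.pem and server-key.pem would show differing public keys; minikube regenerates the pair when the profile is recreated.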
	
	-- /stdout --
	W0806 07:59:24.620777 1167706 out.go:239] * 
	W0806 07:59:24.621745 1167706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 07:59:24.624172 1167706 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10637]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:59:24 kubernetes-upgrade-473733 dockerd[10654]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:24 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 07:59:25 kubernetes-upgrade-473733 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 07:59:25 kubernetes-upgrade-473733 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:59:25 kubernetes-upgrade-473733 systemd[1]: docker.service: Start request repeated too quickly.
	Aug 06 07:59:25 kubernetes-upgrade-473733 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:25 kubernetes-upgrade-473733 systemd[1]: Failed to start Docker Application Container Engine.
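	The closing "Start request repeated too quickly" message is systemd's start rate limiting: after a burst of failed starts within the limit interval, systemd refuses further start requests for the unit rather than looping forever. The knobs involved look like the following drop-in override (illustrative values only, not minikube's shipped docker.service):

```
# /etc/systemd/system/docker.service.d/restart-limit.conf (illustrative)
[Unit]
# At most 5 start attempts within a 60s window; exceeding this yields
# "Start request repeated too quickly" until the window elapses or
# `systemctl reset-failed docker` is run.
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=2
```

	Rate limiting only masks the underlying TLS failure here; once the limit trips, even a fixed configuration needs `systemctl reset-failed` (or the interval to pass) before docker.service will start again.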
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0806 07:59:25.477259   10730 remote_runtime.go:570] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2024-08-06T07:59:25Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000998] FS-Cache: O-key=[8] '22d4c90000000000'
	[  +0.000670] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000901] FS-Cache: N-cookie d=000000004316bd8a{9p.inode} n=00000000c5bb7fb5
	[  +0.001027] FS-Cache: N-key=[8] '22d4c90000000000'
	[  +3.313414] FS-Cache: Duplicate cookie detected
	[  +0.000660] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000889] FS-Cache: O-cookie d=000000004316bd8a{9p.inode} n=00000000588b0bba
	[  +0.001006] FS-Cache: O-key=[8] '21d4c90000000000'
	[  +0.000644] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000860] FS-Cache: N-cookie d=000000004316bd8a{9p.inode} n=00000000501f87da
	[  +0.000960] FS-Cache: N-key=[8] '21d4c90000000000'
	[  +0.310324] FS-Cache: Duplicate cookie detected
	[  +0.000669] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000892] FS-Cache: O-cookie d=000000004316bd8a{9p.inode} n=0000000080475bdd
	[  +0.000968] FS-Cache: O-key=[8] '27d4c90000000000'
	[  +0.000673] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000858] FS-Cache: N-cookie d=000000004316bd8a{9p.inode} n=000000003bfdc3ac
	[  +0.000970] FS-Cache: N-key=[8] '27d4c90000000000'
	[  +4.013225] FS-Cache: Duplicate cookie detected
	[  +0.000651] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000932] FS-Cache: O-cookie d=000000006cc34631{9P.session} n=000000009a44c2fe
	[  +0.000984] FS-Cache: O-key=[10] '34323939333732303530'
	[  +0.000770] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000869] FS-Cache: N-cookie d=000000006cc34631{9P.session} n=00000000c41d3a3f
	[  +0.001007] FS-Cache: N-key=[10] '34323939333732303530'
	
	
	==> kernel <==
	 07:59:25 up  5:41,  0 users,  load average: 3.37, 3.32, 2.98
	Linux kubernetes-upgrade-473733 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kubelet <==
	Aug 06 07:59:21 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:21.473336    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Aug 06 07:59:21 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:21.473827    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:21 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:21.473916    9204 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.475808    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.475864    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.475877    9204 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.753002    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.753252    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.753342    9204 kubelet_pods.go:1191] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.753420    9204 kubelet.go:2508] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:22 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:22.802392    9204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-473733?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="6.4s"
	Aug 06 07:59:23 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:23.477042    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Aug 06 07:59:23 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:23.477544    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:23 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:23.477656    9204 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.142101    9204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-kubernetes-upgrade-473733.17e914c19306d519  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-kubernetes-upgrade-473733,UID:6b3959ba09a601ef37f4746d6c2e0665,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-473733,},FirstTimestamp:2024-08-06 07:59:14.073867545 +0000 UTC m=+8.049092507,LastTimestamp:2024-08-06 07:59:14.073867545 +0000 UTC m=+8.049092507,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-473733,}"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.481450    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.481505    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.481517    9204 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.753079    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.753138    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.753154    9204 kubelet_pods.go:1191] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:24 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:24.753169    9204 kubelet.go:2508] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:25 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:25.482897    9204 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Aug 06 07:59:25 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:25.482949    9204 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 06 07:59:25 kubernetes-upgrade-473733 kubelet[9204]: E0806 07:59:25.482963    9204 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	

-- /stdout --
** stderr ** 
	E0806 07:59:25.302267 1170460 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.317946 1170460 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.333578 1170460 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.349371 1170460 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.365307 1170460 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.381396 1170460 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.397525 1170460 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 07:59:25.413180 1170460 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-473733 -n kubernetes-upgrade-473733
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-473733 -n kubernetes-upgrade-473733: exit status 2 (305.63972ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-473733" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-473733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-473733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-473733: (2.004052499s)
--- FAIL: TestKubernetesUpgrade (355.02s)


Test pass (323/351)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 4.72
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.31.0-rc.0/json-events 5.24
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.21
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.55
31 TestOffline 59.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 231.08
38 TestAddons/serial/Volcano 40.88
40 TestAddons/serial/GCPAuth/Namespaces 0.17
42 TestAddons/parallel/Registry 17.02
43 TestAddons/parallel/Ingress 21.19
44 TestAddons/parallel/InspektorGadget 11.82
45 TestAddons/parallel/MetricsServer 5.73
48 TestAddons/parallel/CSI 55.11
49 TestAddons/parallel/Headlamp 17.59
50 TestAddons/parallel/CloudSpanner 6.54
51 TestAddons/parallel/LocalPath 54.09
52 TestAddons/parallel/NvidiaDevicePlugin 6.55
53 TestAddons/parallel/Yakd 11.7
54 TestAddons/StoppedEnableDisable 11.2
55 TestCertOptions 36.87
56 TestCertExpiration 250.08
57 TestDockerFlags 46.42
58 TestForceSystemdFlag 47.14
59 TestForceSystemdEnv 46.55
65 TestErrorSpam/setup 31.89
66 TestErrorSpam/start 0.75
67 TestErrorSpam/status 1
68 TestErrorSpam/pause 1.28
69 TestErrorSpam/unpause 1.4
70 TestErrorSpam/stop 2.04
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 47.49
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 29.83
77 TestFunctional/serial/KubeContext 0.07
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
82 TestFunctional/serial/CacheCmd/cache/add_local 1.01
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
84 TestFunctional/serial/CacheCmd/cache/list 0.06
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
87 TestFunctional/serial/CacheCmd/cache/delete 0.14
88 TestFunctional/serial/MinikubeKubectlCmd 0.15
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
90 TestFunctional/serial/ExtraConfig 43.48
91 TestFunctional/serial/ComponentHealth 0.11
92 TestFunctional/serial/LogsCmd 1.19
93 TestFunctional/serial/LogsFileCmd 1.22
94 TestFunctional/serial/InvalidService 4.81
96 TestFunctional/parallel/ConfigCmd 0.46
97 TestFunctional/parallel/DashboardCmd 11.99
98 TestFunctional/parallel/DryRun 0.53
99 TestFunctional/parallel/InternationalLanguage 0.22
100 TestFunctional/parallel/StatusCmd 1.36
104 TestFunctional/parallel/ServiceCmdConnect 13.76
105 TestFunctional/parallel/AddonsCmd 0.2
106 TestFunctional/parallel/PersistentVolumeClaim 28.41
108 TestFunctional/parallel/SSHCmd 0.66
109 TestFunctional/parallel/CpCmd 2.38
111 TestFunctional/parallel/FileSync 0.34
112 TestFunctional/parallel/CertSync 2.27
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
120 TestFunctional/parallel/License 0.3
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.28
133 TestFunctional/parallel/ServiceCmd/List 0.64
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
135 TestFunctional/parallel/ProfileCmd/profile_list 0.51
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
139 TestFunctional/parallel/MountCmd/any-port 8.59
140 TestFunctional/parallel/ServiceCmd/Format 0.54
141 TestFunctional/parallel/ServiceCmd/URL 0.4
142 TestFunctional/parallel/MountCmd/specific-port 2.23
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
144 TestFunctional/parallel/Version/short 0.09
145 TestFunctional/parallel/Version/components 1.29
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
151 TestFunctional/parallel/ImageCommands/Setup 0.72
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/parallel/DockerEnv/bash 1.38
157 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
158 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
159 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
162 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 137.08
170 TestMultiControlPlane/serial/DeployApp 54.65
171 TestMultiControlPlane/serial/PingHostFromPods 1.77
172 TestMultiControlPlane/serial/AddWorkerNode 26.23
173 TestMultiControlPlane/serial/NodeLabels 0.12
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
175 TestMultiControlPlane/serial/CopyFile 19.95
176 TestMultiControlPlane/serial/StopSecondaryNode 11.77
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
178 TestMultiControlPlane/serial/RestartSecondaryNode 73.25
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 258.35
181 TestMultiControlPlane/serial/DeleteSecondaryNode 12.35
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
183 TestMultiControlPlane/serial/StopCluster 32.74
184 TestMultiControlPlane/serial/RestartCluster 86.53
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
186 TestMultiControlPlane/serial/AddSecondaryNode 42.83
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
190 TestImageBuild/serial/Setup 31.55
191 TestImageBuild/serial/NormalBuild 1.68
192 TestImageBuild/serial/BuildWithBuildArg 0.87
193 TestImageBuild/serial/BuildWithDockerIgnore 0.65
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.65
198 TestJSONOutput/start/Command 90.25
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.61
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.53
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 10.87
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.22
223 TestKicCustomNetwork/create_custom_network 35.19
224 TestKicCustomNetwork/use_default_bridge_network 33.39
225 TestKicExistingNetwork 35.65
226 TestKicCustomSubnet 36.12
227 TestKicStaticIP 32.45
228 TestMainNoArgs 0.05
229 TestMinikubeProfile 74.88
232 TestMountStart/serial/StartWithMountFirst 8.12
233 TestMountStart/serial/VerifyMountFirst 0.27
234 TestMountStart/serial/StartWithMountSecond 10.33
235 TestMountStart/serial/VerifyMountSecond 0.27
236 TestMountStart/serial/DeleteFirst 1.48
237 TestMountStart/serial/VerifyMountPostDelete 0.26
238 TestMountStart/serial/Stop 1.23
239 TestMountStart/serial/RestartStopped 8.54
240 TestMountStart/serial/VerifyMountPostStop 0.26
243 TestMultiNode/serial/FreshStart2Nodes 94.12
244 TestMultiNode/serial/DeployApp2Nodes 37.59
245 TestMultiNode/serial/PingHostFrom2Pods 1.04
246 TestMultiNode/serial/AddNode 20.46
247 TestMultiNode/serial/MultiNodeLabels 0.09
248 TestMultiNode/serial/ProfileList 0.35
249 TestMultiNode/serial/CopyFile 10.31
250 TestMultiNode/serial/StopNode 2.22
251 TestMultiNode/serial/StartAfterStop 11.39
252 TestMultiNode/serial/RestartKeepsNodes 89.52
253 TestMultiNode/serial/DeleteNode 5.46
254 TestMultiNode/serial/StopMultiNode 21.81
255 TestMultiNode/serial/RestartMultiNode 55.41
256 TestMultiNode/serial/ValidateNameConflict 38.86
261 TestPreload 139.43
263 TestScheduledStopUnix 106.6
264 TestSkaffold 117.29
266 TestInsufficientStorage 11.29
267 TestRunningBinaryUpgrade 105.04
270 TestMissingContainerUpgrade 157.52
272 TestPause/serial/Start 99.24
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
275 TestNoKubernetes/serial/StartWithK8s 31.45
276 TestNoKubernetes/serial/StartWithStopK8s 16.69
277 TestPause/serial/SecondStartNoReconfiguration 29.45
278 TestNoKubernetes/serial/Start 7.37
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
280 TestNoKubernetes/serial/ProfileList 1.47
281 TestNoKubernetes/serial/Stop 1.33
282 TestNoKubernetes/serial/StartNoArgs 8.58
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
284 TestPause/serial/Pause 0.71
296 TestPause/serial/VerifyStatus 0.38
297 TestPause/serial/Unpause 0.69
298 TestPause/serial/PauseAgain 1.06
299 TestPause/serial/DeletePaused 2.33
300 TestPause/serial/VerifyDeletedResources 0.14
301 TestStoppedBinaryUpgrade/Setup 0.64
302 TestStoppedBinaryUpgrade/Upgrade 98.96
310 TestNetworkPlugins/group/auto/Start 103.14
311 TestStoppedBinaryUpgrade/MinikubeLogs 1.71
312 TestNetworkPlugins/group/kindnet/Start 84.11
313 TestNetworkPlugins/group/auto/KubeletFlags 0.38
314 TestNetworkPlugins/group/auto/NetCatPod 14.38
315 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
316 TestNetworkPlugins/group/auto/DNS 0.21
317 TestNetworkPlugins/group/auto/Localhost 0.16
318 TestNetworkPlugins/group/auto/HairPin 0.16
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
320 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
321 TestNetworkPlugins/group/kindnet/DNS 0.3
322 TestNetworkPlugins/group/kindnet/Localhost 0.25
323 TestNetworkPlugins/group/kindnet/HairPin 0.19
324 TestNetworkPlugins/group/calico/Start 84.3
325 TestNetworkPlugins/group/custom-flannel/Start 70.19
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
327 TestNetworkPlugins/group/calico/ControllerPod 6.02
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
329 TestNetworkPlugins/group/calico/KubeletFlags 0.32
330 TestNetworkPlugins/group/calico/NetCatPod 11.32
331 TestNetworkPlugins/group/custom-flannel/DNS 0.22
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
334 TestNetworkPlugins/group/calico/DNS 0.27
335 TestNetworkPlugins/group/calico/Localhost 0.25
336 TestNetworkPlugins/group/calico/HairPin 0.29
337 TestNetworkPlugins/group/false/Start 70.57
338 TestNetworkPlugins/group/enable-default-cni/Start 55.78
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
341 TestNetworkPlugins/group/false/KubeletFlags 0.35
342 TestNetworkPlugins/group/false/NetCatPod 10.31
343 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
344 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
345 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
346 TestNetworkPlugins/group/false/DNS 0.28
347 TestNetworkPlugins/group/false/Localhost 0.31
348 TestNetworkPlugins/group/false/HairPin 0.24
349 TestNetworkPlugins/group/flannel/Start 73.3
350 TestNetworkPlugins/group/bridge/Start 61.02
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
352 TestNetworkPlugins/group/bridge/NetCatPod 11.35
353 TestNetworkPlugins/group/flannel/ControllerPod 6.01
354 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
355 TestNetworkPlugins/group/flannel/NetCatPod 10.26
356 TestNetworkPlugins/group/bridge/DNS 0.2
357 TestNetworkPlugins/group/bridge/Localhost 0.18
358 TestNetworkPlugins/group/bridge/HairPin 0.23
359 TestNetworkPlugins/group/flannel/DNS 0.3
360 TestNetworkPlugins/group/flannel/Localhost 0.26
361 TestNetworkPlugins/group/flannel/HairPin 0.24
362 TestNetworkPlugins/group/kubenet/Start 94.14
364 TestStartStop/group/old-k8s-version/serial/FirstStart 172.95
365 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
366 TestNetworkPlugins/group/kubenet/NetCatPod 10.27
367 TestNetworkPlugins/group/kubenet/DNS 0.18
368 TestNetworkPlugins/group/kubenet/Localhost 0.2
369 TestNetworkPlugins/group/kubenet/HairPin 0.19
371 TestStartStop/group/no-preload/serial/FirstStart 54.46
372 TestStartStop/group/no-preload/serial/DeployApp 9.36
373 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
374 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
375 TestStartStop/group/no-preload/serial/Stop 11.03
376 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
377 TestStartStop/group/old-k8s-version/serial/Stop 11.03
378 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
379 TestStartStop/group/no-preload/serial/SecondStart 272.04
380 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
381 TestStartStop/group/old-k8s-version/serial/SecondStart 131.28
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
385 TestStartStop/group/old-k8s-version/serial/Pause 2.74
387 TestStartStop/group/embed-certs/serial/FirstStart 53.02
388 TestStartStop/group/embed-certs/serial/DeployApp 7.39
389 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
390 TestStartStop/group/embed-certs/serial/Stop 11
391 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
392 TestStartStop/group/embed-certs/serial/SecondStart 266.6
393 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
395 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
396 TestStartStop/group/no-preload/serial/Pause 2.94
398 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.88
399 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
400 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
401 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
402 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
403 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.89
404 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
406 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
407 TestStartStop/group/embed-certs/serial/Pause 2.88
409 TestStartStop/group/newest-cni/serial/FirstStart 37.74
410 TestStartStop/group/newest-cni/serial/DeployApp 0
411 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
412 TestStartStop/group/newest-cni/serial/Stop 10.97
413 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
414 TestStartStop/group/newest-cni/serial/SecondStart 17.53
415 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
417 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
418 TestStartStop/group/newest-cni/serial/Pause 3.08
419 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
420 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
421 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
422 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
TestDownloadOnly/v1.20.0/json-events (6.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-336257 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-336257 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.206481056s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-336257
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-336257: exit status 85 (75.497914ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-336257 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |          |
	|         | -p download-only-336257        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:05:21
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:05:21.998534  884500 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:05:21.998684  884500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:21.998695  884500 out.go:304] Setting ErrFile to fd 2...
	I0806 07:05:21.998700  884500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:21.998932  884500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	W0806 07:05:21.999066  884500 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19370-879111/.minikube/config/config.json: open /home/jenkins/minikube-integration/19370-879111/.minikube/config/config.json: no such file or directory
	I0806 07:05:21.999512  884500 out.go:298] Setting JSON to true
	I0806 07:05:22.000374  884500 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17266,"bootTime":1722910656,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:05:22.000449  884500 start.go:139] virtualization:  
	I0806 07:05:22.004263  884500 out.go:97] [download-only-336257] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0806 07:05:22.004536  884500 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball: no such file or directory
	I0806 07:05:22.004607  884500 notify.go:220] Checking for updates...
	I0806 07:05:22.006534  884500 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:05:22.008889  884500 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:05:22.011023  884500 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:05:22.013357  884500 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:05:22.015714  884500 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0806 07:05:22.020740  884500 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:05:22.021030  884500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:05:22.052099  884500 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:05:22.052202  884500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:22.114628  884500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-06 07:05:22.105335873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:22.114746  884500 docker.go:307] overlay module found
	I0806 07:05:22.116962  884500 out.go:97] Using the docker driver based on user configuration
	I0806 07:05:22.116992  884500 start.go:297] selected driver: docker
	I0806 07:05:22.117000  884500 start.go:901] validating driver "docker" against <nil>
	I0806 07:05:22.117112  884500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:22.168719  884500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-06 07:05:22.159800705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:22.168892  884500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:05:22.169210  884500 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0806 07:05:22.169369  884500 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:05:22.171613  884500 out.go:169] Using Docker driver with root privileges
	I0806 07:05:22.173523  884500 cni.go:84] Creating CNI manager for ""
	I0806 07:05:22.173549  884500 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 07:05:22.173641  884500 start.go:340] cluster config:
	{Name:download-only-336257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-336257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:05:22.175572  884500 out.go:97] Starting "download-only-336257" primary control-plane node in "download-only-336257" cluster
	I0806 07:05:22.175604  884500 cache.go:121] Beginning downloading kic base image for docker with docker
	I0806 07:05:22.177325  884500 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0806 07:05:22.177350  884500 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 07:05:22.177525  884500 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0806 07:05:22.193334  884500 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0806 07:05:22.193518  884500 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0806 07:05:22.193629  884500 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0806 07:05:22.238637  884500 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 07:05:22.238662  884500 cache.go:56] Caching tarball of preloaded images
	I0806 07:05:22.240596  884500 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 07:05:22.242741  884500 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0806 07:05:22.242759  884500 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 07:05:22.329728  884500 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 07:05:25.185992  884500 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0806 07:05:26.534401  884500 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 07:05:26.534517  884500 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-336257 host does not exist
	  To start a cluster, run: "minikube start -p download-only-336257"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-336257
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (4.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-096579 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-096579 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.723375827s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.72s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-096579
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-096579: exit status 85 (73.756212ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-336257 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-336257        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-336257        | download-only-336257 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only        | download-only-096579 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-096579        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:05:28
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:05:28.615350  884707 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:05:28.615515  884707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:28.615528  884707 out.go:304] Setting ErrFile to fd 2...
	I0806 07:05:28.615534  884707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:28.615790  884707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:05:28.616232  884707 out.go:298] Setting JSON to true
	I0806 07:05:28.617059  884707 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17272,"bootTime":1722910656,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:05:28.617126  884707 start.go:139] virtualization:  
	I0806 07:05:28.619883  884707 out.go:97] [download-only-096579] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0806 07:05:28.620153  884707 notify.go:220] Checking for updates...
	I0806 07:05:28.622077  884707 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:05:28.623865  884707 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:05:28.625738  884707 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:05:28.627743  884707 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:05:28.631164  884707 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0806 07:05:28.635296  884707 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:05:28.635588  884707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:05:28.659129  884707 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:05:28.659273  884707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:28.718279  884707 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-06 07:05:28.708914466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:28.718392  884707 docker.go:307] overlay module found
	I0806 07:05:28.720334  884707 out.go:97] Using the docker driver based on user configuration
	I0806 07:05:28.720364  884707 start.go:297] selected driver: docker
	I0806 07:05:28.720371  884707 start.go:901] validating driver "docker" against <nil>
	I0806 07:05:28.720488  884707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:28.773642  884707 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-06 07:05:28.764867312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:28.773808  884707 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:05:28.774097  884707 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0806 07:05:28.774261  884707 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:05:28.776258  884707 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-096579 host does not exist
	  To start a cluster, run: "minikube start -p download-only-096579"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

TestDownloadOnly/v1.30.3/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.20s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-096579
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0-rc.0/json-events (5.24s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-550445 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-550445 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.242521466s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (5.24s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-550445
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-550445: exit status 85 (73.736135ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-336257 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-336257           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-336257           | download-only-336257 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only           | download-only-096579 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-096579           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| delete  | -p download-only-096579           | download-only-096579 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC | 06 Aug 24 07:05 UTC |
	| start   | -o=json --download-only           | download-only-550445 | jenkins | v1.33.1 | 06 Aug 24 07:05 UTC |                     |
	|         | -p download-only-550445           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 07:05:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 07:05:33.746735  884914 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:05:33.746857  884914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:33.746868  884914 out.go:304] Setting ErrFile to fd 2...
	I0806 07:05:33.746874  884914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:05:33.747138  884914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:05:33.747666  884914 out.go:298] Setting JSON to true
	I0806 07:05:33.748508  884914 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17278,"bootTime":1722910656,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:05:33.748582  884914 start.go:139] virtualization:  
	I0806 07:05:33.751799  884914 out.go:97] [download-only-550445] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0806 07:05:33.752015  884914 notify.go:220] Checking for updates...
	I0806 07:05:33.754394  884914 out.go:169] MINIKUBE_LOCATION=19370
	I0806 07:05:33.756741  884914 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:05:33.758942  884914 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:05:33.761023  884914 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:05:33.763139  884914 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0806 07:05:33.766853  884914 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 07:05:33.767169  884914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:05:33.793634  884914 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:05:33.793726  884914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:33.850690  884914 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-06 07:05:33.841726023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:33.850800  884914 docker.go:307] overlay module found
	I0806 07:05:33.852962  884914 out.go:97] Using the docker driver based on user configuration
	I0806 07:05:33.852996  884914 start.go:297] selected driver: docker
	I0806 07:05:33.853005  884914 start.go:901] validating driver "docker" against <nil>
	I0806 07:05:33.853124  884914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:05:33.906872  884914 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-06 07:05:33.897336494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:05:33.907057  884914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 07:05:33.907348  884914 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0806 07:05:33.907538  884914 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 07:05:33.909560  884914 out.go:169] Using Docker driver with root privileges
	I0806 07:05:33.911130  884914 cni.go:84] Creating CNI manager for ""
	I0806 07:05:33.911174  884914 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 07:05:33.911184  884914 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 07:05:33.911282  884914 start.go:340] cluster config:
	{Name:download-only-550445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-550445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:05:33.912999  884914 out.go:97] Starting "download-only-550445" primary control-plane node in "download-only-550445" cluster
	I0806 07:05:33.913018  884914 cache.go:121] Beginning downloading kic base image for docker with docker
	I0806 07:05:33.914627  884914 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0806 07:05:33.914651  884914 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 07:05:33.914703  884914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0806 07:05:33.929732  884914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0806 07:05:33.929865  884914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0806 07:05:33.929885  884914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0806 07:05:33.929891  884914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0806 07:05:33.929898  884914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0806 07:05:33.972577  884914 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 07:05:33.972611  884914 cache.go:56] Caching tarball of preloaded images
	I0806 07:05:33.972782  884914 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 07:05:33.974897  884914 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0806 07:05:33.974920  884914 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 07:05:34.060553  884914 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 07:05:37.420112  884914 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 07:05:37.420243  884914 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-879111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 07:05:38.287855  884914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 07:05:38.288301  884914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/download-only-550445/config.json ...
	I0806 07:05:38.288344  884914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/download-only-550445/config.json: {Name:mkb5136b69456a2421bb9857c47b75b204cc8ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 07:05:38.289016  884914 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 07:05:38.289249  884914 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19370-879111/.minikube/cache/linux/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-550445 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550445"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-550445
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-462305 --alsologtostderr --binary-mirror http://127.0.0.1:43619 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-462305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-462305
--- PASS: TestBinaryMirror (0.55s)

TestOffline (59.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-705695 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
E0806 07:49:32.075252  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-705695 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (57.360781559s)
helpers_test.go:175: Cleaning up "offline-docker-705695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-705695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-705695: (2.328542809s)
--- PASS: TestOffline (59.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-657623
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-657623: exit status 85 (68.756127ms)

-- stdout --
	* Profile "addons-657623" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657623"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-657623
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-657623: exit status 85 (68.157708ms)

-- stdout --
	* Profile "addons-657623" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657623"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (231.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-657623 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-657623 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m51.081712885s)
--- PASS: TestAddons/Setup (231.08s)

TestAddons/serial/Volcano (40.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 66.300715ms
addons_test.go:913: volcano-controller stabilized in 66.4183ms
addons_test.go:897: volcano-scheduler stabilized in 67.111174ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-rj5zh" [5460df3c-055f-4e8f-af6f-565e78b0dcd2] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004344085s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-2qznb" [da8c626b-1c20-43e5-b8fb-4b7fc4ac14d9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003506013s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-6q4xg" [c88dfc32-1871-49d9-9d61-9a6b62811f9b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004651094s
addons_test.go:932: (dbg) Run:  kubectl --context addons-657623 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-657623 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-657623 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [23bceb83-c962-43d7-9346-bddcd66545e1] Pending
helpers_test.go:344: "test-job-nginx-0" [23bceb83-c962-43d7-9346-bddcd66545e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [23bceb83-c962-43d7-9346-bddcd66545e1] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003347872s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable volcano --alsologtostderr -v=1: (10.234664751s)
--- PASS: TestAddons/serial/Volcano (40.88s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-657623 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-657623 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Registry (17.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.187585ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-ldwqh" [76194a63-21f6-4e11-b330-94aefb54502d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009858607s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9fgbv" [73ec8fde-0c6f-42c6-bd62-c3337e2cc3b9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004868222s
addons_test.go:342: (dbg) Run:  kubectl --context addons-657623 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-657623 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-657623 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.122028803s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 ip
2024/08/06 07:10:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.02s)

TestAddons/parallel/Ingress (21.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-657623 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-657623 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-657623 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9200f299-d177-40fd-9f97-e53577685f86] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9200f299-d177-40fd-9f97-e53577685f86] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003727366s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-657623 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable ingress-dns --alsologtostderr -v=1: (1.676773737s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable ingress --alsologtostderr -v=1: (7.71364383s)
--- PASS: TestAddons/parallel/Ingress (21.19s)

TestAddons/parallel/InspektorGadget (11.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nw7jm" [61ecf3f0-f646-4b57-bfff-6409430923b8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.009635823s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-657623
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-657623: (5.808681665s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.826181ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-dlxbq" [f138632d-cb2a-4311-aed0-fe31b0af8685] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004911715s
addons_test.go:417: (dbg) Run:  kubectl --context addons-657623 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

TestAddons/parallel/CSI (55.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.675921ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-657623 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-657623 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9ff52a87-1a5a-4c2d-a566-c5c45ed3bab7] Pending
helpers_test.go:344: "task-pv-pod" [9ff52a87-1a5a-4c2d-a566-c5c45ed3bab7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9ff52a87-1a5a-4c2d-a566-c5c45ed3bab7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003370474s
addons_test.go:590: (dbg) Run:  kubectl --context addons-657623 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657623 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657623 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-657623 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-657623 delete pod task-pv-pod: (1.131694693s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-657623 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-657623 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-657623 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ba162278-d11b-4bcb-ad42-288c084cae0d] Pending
helpers_test.go:344: "task-pv-pod-restore" [ba162278-d11b-4bcb-ad42-288c084cae0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ba162278-d11b-4bcb-ad42-288c084cae0d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004805353s
addons_test.go:632: (dbg) Run:  kubectl --context addons-657623 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-657623 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-657623 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.810630277s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.11s)

TestAddons/parallel/Headlamp (17.59s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-657623 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-jwnwx" [3c103534-3a86-4c2b-baa6-995925af9379] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-jwnwx" [3c103534-3a86-4c2b-baa6-995925af9379] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003999919s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable headlamp --alsologtostderr -v=1: (5.665152968s)
--- PASS: TestAddons/parallel/Headlamp (17.59s)

TestAddons/parallel/CloudSpanner (6.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-m4x64" [d4811c0f-755e-4c98-bfb1-92e3e1bd6861] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003889514s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-657623
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

TestAddons/parallel/LocalPath (54.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-657623 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-657623 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7bdabe33-f45f-4563-ba97-407e4f236c85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7bdabe33-f45f-4563-ba97-407e4f236c85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7bdabe33-f45f-4563-ba97-407e4f236c85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003780372s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-657623 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 ssh "cat /opt/local-path-provisioner/pvc-4973c9ae-9f1b-44e8-a6e8-0924885e5c8e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-657623 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-657623 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.485562726s)
--- PASS: TestAddons/parallel/LocalPath (54.09s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8hdtg" [62baf372-403d-443f-8719-82acb1937158] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005682689s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-657623
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)
TestAddons/parallel/Yakd (11.7s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-x5zv5" [569f95a4-2d6a-48e7-b96c-4b8c24d27f72] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003950677s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-657623 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-657623 addons disable yakd --alsologtostderr -v=1: (5.697272858s)
--- PASS: TestAddons/parallel/Yakd (11.70s)
TestAddons/StoppedEnableDisable (11.2s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-657623
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-657623: (10.935011405s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-657623
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-657623
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-657623
--- PASS: TestAddons/StoppedEnableDisable (11.20s)
TestCertOptions (36.87s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-256388 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-256388 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.996957134s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-256388 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-256388 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-256388 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-256388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-256388
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-256388: (2.190654961s)
--- PASS: TestCertOptions (36.87s)
TestCertExpiration (250.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834751 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834751 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (44.010233105s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834751 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834751 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.699598452s)
helpers_test.go:175: Cleaning up "cert-expiration-834751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-834751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-834751: (2.365885199s)
--- PASS: TestCertExpiration (250.08s)
TestDockerFlags (46.42s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-968082 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-968082 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.581831112s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-968082 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-968082 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-968082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-968082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-968082: (2.135146901s)
--- PASS: TestDockerFlags (46.42s)
TestForceSystemdFlag (47.14s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-101614 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-101614 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.345749657s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-101614 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-101614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-101614
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-101614: (2.352272645s)
--- PASS: TestForceSystemdFlag (47.14s)
TestForceSystemdEnv (46.55s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-609694 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-609694 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.858483656s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-609694 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-609694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-609694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-609694: (2.256055984s)
--- PASS: TestForceSystemdEnv (46.55s)
TestErrorSpam/setup (31.89s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-744826 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-744826 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-744826 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-744826 --driver=docker  --container-runtime=docker: (31.893049849s)
--- PASS: TestErrorSpam/setup (31.89s)
TestErrorSpam/start (0.75s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)
TestErrorSpam/status (1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 status
--- PASS: TestErrorSpam/status (1.00s)
TestErrorSpam/pause (1.28s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 pause
--- PASS: TestErrorSpam/pause (1.28s)
TestErrorSpam/unpause (1.4s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 unpause
--- PASS: TestErrorSpam/unpause (1.40s)
TestErrorSpam/stop (2.04s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 stop: (1.840079843s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-744826 --log_dir /tmp/nospam-744826 stop
--- PASS: TestErrorSpam/stop (2.04s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19370-879111/.minikube/files/etc/test/nested/copy/884495/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (47.49s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-674935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (47.48201939s)
--- PASS: TestFunctional/serial/StartWithProxy (47.49s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (29.83s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --alsologtostderr -v=8
E0806 07:14:32.077311  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.084226  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.094488  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.114758  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.155104  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.235414  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.395820  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:32.716472  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:33.357526  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-674935 --alsologtostderr -v=8: (29.829517489s)
functional_test.go:659: soft start took 29.83334939s for "functional-674935" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.83s)
TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-674935 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:3.1
E0806 07:14:34.637705  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:3.1: (1.146888005s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:3.3: (1.067060715s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 cache add registry.k8s.io/pause:latest: (1.01575272s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)
TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-674935 /tmp/TestFunctionalserialCacheCmdcacheadd_local4224259514/001
E0806 07:14:37.198045  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache add minikube-local-cache-test:functional-674935
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache delete minikube-local-cache-test:functional-674935
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-674935
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.367579ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)
TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 kubectl -- --context functional-674935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-674935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
TestFunctional/serial/ExtraConfig (43.48s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0806 07:14:42.318291  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:14:52.558558  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:15:13.038809  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-674935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.483214538s)
functional_test.go:757: restart took 43.483737594s for "functional-674935" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.48s)
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-674935 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
TestFunctional/serial/LogsCmd (1.19s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 logs: (1.188496032s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)
TestFunctional/serial/LogsFileCmd (1.22s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 logs --file /tmp/TestFunctionalserialLogsFileCmd3609530036/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 logs --file /tmp/TestFunctionalserialLogsFileCmd3609530036/001/logs.txt: (1.220488101s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)
TestFunctional/serial/InvalidService (4.81s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-674935 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-674935
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-674935: exit status 115 (726.22763ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32477 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-674935 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.81s)
TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 config get cpus: exit status 14 (91.479942ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 config get cpus: exit status 14 (69.107118ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
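The `(dbg) Non-zero exit` lines above follow a fixed shape, ending in `exit status <N> (<duration>)`. A minimal sketch of pulling those two fields out of such a report line (a hypothetical helper, not part of the minikube test harness):

```python
import re

# Matches the tail of a "(dbg) Non-zero exit" log line, e.g.
#   "... config get cpus: exit status 14 (91.479942ms)"
LINE_RE = re.compile(r"exit status (\d+) \(([\d.]+(?:ms|s))\)")

def parse_nonzero_exit(line):
    """Return (exit_status, duration) from a report line, or None if absent."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

sample = ("functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 "
          "-p functional-674935 config get cpus: exit status 14 (91.479942ms)")
print(parse_nonzero_exit(sample))  # (14, '91.479942ms')
```

The test treats this non-zero status as the expected outcome for an unset key (`Error: specified key could not be found in config`), which is why the run still passes.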

TestFunctional/parallel/DashboardCmd (11.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-674935 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-674935 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 924256: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.99s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-674935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (264.616929ms)

-- stdout --
	* [functional-674935] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0806 07:16:06.315277  923852 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:16:06.315509  923852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:16:06.315533  923852 out.go:304] Setting ErrFile to fd 2...
	I0806 07:16:06.315551  923852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:16:06.315871  923852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:16:06.316282  923852 out.go:298] Setting JSON to false
	I0806 07:16:06.317340  923852 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17910,"bootTime":1722910656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:16:06.317421  923852 start.go:139] virtualization:  
	I0806 07:16:06.320226  923852 out.go:177] * [functional-674935] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0806 07:16:06.323397  923852 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:16:06.323566  923852 notify.go:220] Checking for updates...
	I0806 07:16:06.329794  923852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:16:06.333257  923852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:16:06.335433  923852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:16:06.337737  923852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0806 07:16:06.339626  923852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:16:06.341720  923852 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:16:06.342299  923852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:16:06.370953  923852 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:16:06.371066  923852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:16:06.492735  923852 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-06 07:16:06.477122721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:16:06.492853  923852 docker.go:307] overlay module found
	I0806 07:16:06.497264  923852 out.go:177] * Using the docker driver based on existing profile
	I0806 07:16:06.499612  923852 start.go:297] selected driver: docker
	I0806 07:16:06.499636  923852 start.go:901] validating driver "docker" against &{Name:functional-674935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-674935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:16:06.499744  923852 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:16:06.502672  923852 out.go:177] 
	W0806 07:16:06.505171  923852 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0806 07:16:06.507025  923852 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.53s)
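The dry-run failure above exits with status 23 because the requested 250MB is below the 1800MB usable minimum reported in the stderr. A rough sketch of that bound check (the constant and function name are illustrative, not minikube's actual code):

```python
MIN_USABLE_MB = 1800  # the minimum reported in the stderr above

def validate_requested_memory(requested_mb):
    """Return an RSRC_INSUFFICIENT_REQ_MEMORY-style message, or None if OK."""
    if requested_mb < MIN_USABLE_MB:
        return ("Requested memory allocation %dMiB is less than the usable "
                "minimum of %dMB" % (requested_mb, MIN_USABLE_MB))
    return None

print(validate_requested_memory(250))
```

The test only asserts that the dry run fails with this message and exit code; no cluster is started.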

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-674935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-674935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (220.580959ms)

-- stdout --
	* [functional-674935] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0806 07:16:06.097152  923808 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:16:06.097333  923808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:16:06.097346  923808 out.go:304] Setting ErrFile to fd 2...
	I0806 07:16:06.097352  923808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:16:06.097782  923808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:16:06.098248  923808 out.go:298] Setting JSON to false
	I0806 07:16:06.099424  923808 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17910,"bootTime":1722910656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0806 07:16:06.099541  923808 start.go:139] virtualization:  
	I0806 07:16:06.102453  923808 out.go:177] * [functional-674935] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0806 07:16:06.105209  923808 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 07:16:06.105950  923808 notify.go:220] Checking for updates...
	I0806 07:16:06.110190  923808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 07:16:06.112426  923808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	I0806 07:16:06.115109  923808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	I0806 07:16:06.117393  923808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0806 07:16:06.119608  923808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 07:16:06.122244  923808 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:16:06.122852  923808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 07:16:06.147126  923808 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0806 07:16:06.147259  923808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:16:06.225145  923808 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-06 07:16:06.215393904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:16:06.225305  923808 docker.go:307] overlay module found
	I0806 07:16:06.229865  923808 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0806 07:16:06.232787  923808 start.go:297] selected driver: docker
	I0806 07:16:06.232809  923808 start.go:901] validating driver "docker" against &{Name:functional-674935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-674935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 07:16:06.232940  923808 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 07:16:06.236124  923808 out.go:177] 
	W0806 07:16:06.239572  923808 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0806 07:16:06.241723  923808 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)

TestFunctional/parallel/ServiceCmdConnect (13.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-674935 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-674935 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-ts2b9" [d13dd945-9cec-402f-8e06-dd5a07ebc87f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-ts2b9" [d13dd945-9cec-402f-8e06-dd5a07ebc87f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003879569s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32122
functional_test.go:1671: http://192.168.49.2:32122: success! body:

Hostname: hello-node-connect-6f49f58cd5-ts2b9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32122
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.76s)
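The echoserver body captured above has a predictable layout: a `Hostname:` line followed by colon-terminated sections of `key=value` pairs. A hypothetical parser for pulling the hostname and request headers out of such a capture (not part of the test harness):

```python
def parse_echoserver_body(body):
    """Extract the hostname and request headers from an echoserver response."""
    hostname = None
    headers = {}
    section = None
    for raw in body.splitlines():
        line = raw.strip()
        if line.startswith("Hostname:"):
            hostname = line.split(":", 1)[1].strip()
        elif line.endswith(":"):
            section = line[:-1]  # e.g. "Request Headers"
        elif section == "Request Headers" and "=" in line:
            key, _, value = line.partition("=")
            headers[key] = value
    return {"hostname": hostname, "headers": headers}

body = """Hostname: hello-node-connect-6f49f58cd5-ts2b9

Request Headers:
\taccept-encoding=gzip
\thost=192.168.49.2:32122
\tuser-agent=Go-http-client/1.1
"""
result = parse_echoserver_body(body)
print(result["hostname"], result["headers"]["host"])
```

The test itself only checks that the NodePort URL answers with a non-empty body; the structure above is what the echoserver image returns.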

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (28.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33f63032-3e1f-4a39-a575-f2d011b57abf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004992396s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-674935 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-674935 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-674935 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-674935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6bede41c-491c-4ab4-b29e-337ad520c8fd] Pending
helpers_test.go:344: "sp-pod" [6bede41c-491c-4ab4-b29e-337ad520c8fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6bede41c-491c-4ab4-b29e-337ad520c8fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004028793s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-674935 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-674935 delete -f testdata/storage-provisioner/pod.yaml
E0806 07:15:53.999046  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-674935 delete -f testdata/storage-provisioner/pod.yaml: (1.314023497s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-674935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ccb4b026-2ec4-4a2d-8c62-b9384936f094] Pending
helpers_test.go:344: "sp-pod" [ccb4b026-2ec4-4a2d-8c62-b9384936f094] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ccb4b026-2ec4-4a2d-8c62-b9384936f094] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003903218s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-674935 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.41s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh -n functional-674935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cp functional-674935:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2170541909/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh -n functional-674935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh -n functional-674935 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

TestFunctional/parallel/FileSync (0.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/884495/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /etc/test/nested/copy/884495/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.27s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/884495.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /etc/ssl/certs/884495.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/884495.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /usr/share/ca-certificates/884495.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8844952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /etc/ssl/certs/8844952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8844952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /usr/share/ca-certificates/8844952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-674935 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh "sudo systemctl is-active crio": exit status 1 (273.076022ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

TestFunctional/parallel/License (0.3s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 921091: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-674935 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4886492b-b785-4699-ba50-0200130187f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4886492b-b785-4699-ba50-0200130187f0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00426773s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-674935 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.120.192 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-674935 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-674935 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-674935 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-t98lv" [e91f07fc-e010-4a2b-8740-19f1bbd4d6a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-t98lv" [e91f07fc-e010-4a2b-8740-19f1bbd4d6a5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004449305s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

TestFunctional/parallel/ServiceCmd/List (0.64s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "414.866584ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "93.633583ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service list -o json
functional_test.go:1490: Took "661.630075ms" to run "out/minikube-linux-arm64 -p functional-674935 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "393.483809ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "80.033967ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31715
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

TestFunctional/parallel/MountCmd/any-port (8.59s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdany-port2446712333/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722928563525201275" to /tmp/TestFunctionalparallelMountCmdany-port2446712333/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722928563525201275" to /tmp/TestFunctionalparallelMountCmdany-port2446712333/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722928563525201275" to /tmp/TestFunctionalparallelMountCmdany-port2446712333/001/test-1722928563525201275
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (477.092864ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 test-1722928563525201275
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh cat /mount-9p/test-1722928563525201275
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-674935 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [123492ba-c803-4638-97ec-2a44f2766822] Pending
helpers_test.go:344: "busybox-mount" [123492ba-c803-4638-97ec-2a44f2766822] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [123492ba-c803-4638-97ec-2a44f2766822] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [123492ba-c803-4638-97ec-2a44f2766822] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004077987s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-674935 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdany-port2446712333/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31715
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/MountCmd/specific-port (2.23s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdspecific-port3878572045/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (542.547368ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdspecific-port3878572045/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh "sudo umount -f /mount-9p": exit status 1 (342.937585ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-674935 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdspecific-port3878572045/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T" /mount1: exit status 1 (1.160891325s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-674935 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-674935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2956947832/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

TestFunctional/parallel/Version/short (0.09s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.29s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 version -o=json --components: (1.285598758s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-674935 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-674935
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-674935
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-674935 image ls --format short --alsologtostderr:
I0806 07:16:24.204414  927088 out.go:291] Setting OutFile to fd 1 ...
I0806 07:16:24.204594  927088 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.204606  927088 out.go:304] Setting ErrFile to fd 2...
I0806 07:16:24.204611  927088 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.204898  927088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
I0806 07:16:24.205563  927088 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.205711  927088 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.206223  927088 cli_runner.go:164] Run: docker container inspect functional-674935 --format={{.State.Status}}
I0806 07:16:24.233089  927088 ssh_runner.go:195] Run: systemctl --version
I0806 07:16:24.233154  927088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-674935
I0806 07:16:24.253576  927088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/functional-674935/id_rsa Username:docker}
I0806 07:16:24.347971  927088 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-674935 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-674935 | 12498d4e0e419 | 30B    |
| docker.io/kicbase/echo-server               | functional-674935 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-674935 image ls --format table --alsologtostderr:
I0806 07:16:24.751075  927246 out.go:291] Setting OutFile to fd 1 ...
I0806 07:16:24.751253  927246 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.751265  927246 out.go:304] Setting ErrFile to fd 2...
I0806 07:16:24.751271  927246 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.751533  927246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
I0806 07:16:24.752150  927246 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.752282  927246 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.752737  927246 cli_runner.go:164] Run: docker container inspect functional-674935 --format={{.State.Status}}
I0806 07:16:24.769399  927246 ssh_runner.go:195] Run: systemctl --version
I0806 07:16:24.769461  927246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-674935
I0806 07:16:24.802620  927246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/functional-674935/id_rsa Username:docker}
I0806 07:16:24.908272  927246 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
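The pipe-delimited table emitted by `image ls --format table` can be parsed back into records. A minimal sketch, using two rows excerpted from the output above; `parse_table` is a hypothetical helper written for illustration, not part of minikube:

```python
# Parse a minikube "image ls --format table" listing back into records.
# Sample rows are copied from the test output above.
TABLE = """\
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
"""

def parse_table(text):
    rows = []
    for line in text.splitlines():
        # Skip non-table lines and the |---| rule separators.
        if not line.startswith("|") or set(line) <= {"|", "-"}:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(cells)
    header, *body = rows
    return [dict(zip(header, r)) for r in body]

images = parse_table(TABLE)
print(images[0]["Image"], images[0]["Size"])  # registry.k8s.io/etcd 139MB
```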

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-674935 image ls --format json --alsologtostderr:
[{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"12498d4e0e419c03629ce1f0635ca20976a8aa25f98b212fa64b36df2e764088","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-674935"],"size":"30"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-674935"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-674935 image ls --format json --alsologtostderr:
I0806 07:16:24.472545  927159 out.go:291] Setting OutFile to fd 1 ...
I0806 07:16:24.473080  927159 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.473099  927159 out.go:304] Setting ErrFile to fd 2...
I0806 07:16:24.473105  927159 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.473356  927159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
I0806 07:16:24.474048  927159 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.474199  927159 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.474808  927159 cli_runner.go:164] Run: docker container inspect functional-674935 --format={{.State.Status}}
I0806 07:16:24.504155  927159 ssh_runner.go:195] Run: systemctl --version
I0806 07:16:24.504236  927159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-674935
I0806 07:16:24.538950  927159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/functional-674935/id_rsa Username:docker}
I0806 07:16:24.650355  927159 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
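The `image ls --format json` output above is a JSON array of objects with `id`, `repoDigests`, `repoTags`, and `size` keys. A minimal sketch that maps tag to size in bytes, using two entries excerpted from that output (the mapping helper is illustrative, not part of the test suite):

```python
import json

# Two entries copied from the "image ls --format json" stdout above.
RAW = ('[{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd",'
       '"repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},'
       '{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",'
       '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]')

# "size" is a decimal string in the JSON; convert it to an int per tag.
sizes = {tag: int(img["size"]) for img in json.loads(RAW) for tag in img["repoTags"]}
print(sizes)
```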

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-674935 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 12498d4e0e419c03629ce1f0635ca20976a8aa25f98b212fa64b36df2e764088
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-674935
size: "30"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-674935
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-674935 image ls --format yaml --alsologtostderr:
I0806 07:16:24.225088  927094 out.go:291] Setting OutFile to fd 1 ...
I0806 07:16:24.225583  927094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.225624  927094 out.go:304] Setting ErrFile to fd 2...
I0806 07:16:24.225644  927094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.225950  927094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
I0806 07:16:24.226669  927094 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.226852  927094 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.228304  927094 cli_runner.go:164] Run: docker container inspect functional-674935 --format={{.State.Status}}
I0806 07:16:24.248588  927094 ssh_runner.go:195] Run: systemctl --version
I0806 07:16:24.248650  927094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-674935
I0806 07:16:24.274435  927094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/functional-674935/id_rsa Username:docker}
I0806 07:16:24.374510  927094 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-674935 ssh pgrep buildkitd: exit status 1 (327.319795ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image build -t localhost/my-image:functional-674935 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-674935 image build -t localhost/my-image:functional-674935 testdata/build --alsologtostderr: (2.143615167s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-674935 image build -t localhost/my-image:functional-674935 testdata/build --alsologtostderr:
I0806 07:16:24.786484  927253 out.go:291] Setting OutFile to fd 1 ...
I0806 07:16:24.787579  927253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.787600  927253 out.go:304] Setting ErrFile to fd 2...
I0806 07:16:24.787607  927253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 07:16:24.787879  927253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
I0806 07:16:24.788581  927253 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.790885  927253 config.go:182] Loaded profile config "functional-674935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 07:16:24.791423  927253 cli_runner.go:164] Run: docker container inspect functional-674935 --format={{.State.Status}}
I0806 07:16:24.817887  927253 ssh_runner.go:195] Run: systemctl --version
I0806 07:16:24.817940  927253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-674935
I0806 07:16:24.841637  927253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/functional-674935/id_rsa Username:docker}
I0806 07:16:24.940349  927253 build_images.go:161] Building image from path: /tmp/build.651108701.tar
I0806 07:16:24.940416  927253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0806 07:16:24.955886  927253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.651108701.tar
I0806 07:16:24.959383  927253 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.651108701.tar: stat -c "%s %y" /var/lib/minikube/build/build.651108701.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.651108701.tar': No such file or directory
I0806 07:16:24.959411  927253 ssh_runner.go:362] scp /tmp/build.651108701.tar --> /var/lib/minikube/build/build.651108701.tar (3072 bytes)
I0806 07:16:24.986508  927253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.651108701
I0806 07:16:24.995596  927253 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.651108701 -xf /var/lib/minikube/build/build.651108701.tar
I0806 07:16:25.013841  927253 docker.go:360] Building image: /var/lib/minikube/build/build.651108701
I0806 07:16:25.013935  927253 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-674935 /var/lib/minikube/build/build.651108701
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:eff011615ffb9e600594f37798c1631f5613452cd6ca2d8e3291a3918b261ee9 done
#8 naming to localhost/my-image:functional-674935 done
#8 DONE 0.1s
I0806 07:16:26.832712  927253 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-674935 /var/lib/minikube/build/build.651108701: (1.818747015s)
I0806 07:16:26.832784  927253 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.651108701
I0806 07:16:26.841957  927253 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.651108701.tar
I0806 07:16:26.851036  927253 build_images.go:217] Built localhost/my-image:functional-674935 from /tmp/build.651108701.tar
I0806 07:16:26.851077  927253 build_images.go:133] succeeded building to: functional-674935
I0806 07:16:26.851083  927253 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
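The BuildKit stages in the log above (#1 through #8) imply a three-instruction Dockerfile. A plausible reconstruction of `testdata/build/Dockerfile`, inferred only from the `#5 FROM`, `#6 RUN true`, and `#7 ADD content.txt /` steps (the actual file is not shown in this log, so treat this as a sketch):

```dockerfile
# Reconstructed from the BuildKit log: base image resolved in step #2/#5,
# a no-op RUN in step #6, and the build-context file added in step #7.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```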

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-674935
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image load --daemon docker.io/kicbase/echo-server:functional-674935 --alsologtostderr
2024/08/06 07:16:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/DockerEnv/bash (1.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-674935 docker-env) && out/minikube-linux-arm64 status -p functional-674935"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-674935 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image load --daemon docker.io/kicbase/echo-server:functional-674935 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-674935
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image load --daemon docker.io/kicbase/echo-server:functional-674935 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image save docker.io/kicbase/echo-server:functional-674935 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image rm docker.io/kicbase/echo-server:functional-674935 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-674935
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-674935 image save --daemon docker.io/kicbase/echo-server:functional-674935 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-674935
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-674935
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-674935
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-674935
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (137.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-484440 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0806 07:17:15.920105  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-484440 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m16.159079793s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (137.08s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (54.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-484440 -- rollout status deployment/busybox: (7.176684519s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0806 07:19:32.074438  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-64xjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-c5xq5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-wt2h7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-64xjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-c5xq5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-wt2h7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-64xjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-c5xq5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-wt2h7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (54.65s)
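The repeated ha_test.go:140/149 lines above are a poll-until-ready pattern: re-query the pod IPs until all three busybox replicas report one. A minimal sketch of that loop in Python, where `fetch` is a stand-in for the `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` call (the function name and simulated replies are illustrative, not part of the test suite):

```python
import time

def wait_for_pod_ips(fetch, expected=3, retries=10, delay=0.0):
    """Re-run `fetch` (a stand-in for the kubectl jsonpath query)
    until it reports `expected` distinct pod IPs or retries run out."""
    ips = []
    for _ in range(retries):
        ips = fetch().split()
        if len(set(ips)) >= expected:
            return ips
        time.sleep(delay)  # the real test backs off between attempts
    raise TimeoutError(f"expected {expected} pod IPs, last saw {len(ips)}")

# Simulated query output: two replicas ready at first, then the third schedules.
replies = iter([
    "10.244.1.2 10.244.0.4",
    "10.244.1.2 10.244.0.4",
    "10.244.1.2 10.244.0.4 10.244.2.2",
])
ips = wait_for_pod_ips(lambda: next(replies))
print(ips)
```

The "expected 3 Pod IPs but got 2 (may be temporary)" messages in the log are exactly the intermediate iterations of such a loop before the rollout completes.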

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-64xjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-64xjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-c5xq5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-c5xq5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-wt2h7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-484440 -- exec busybox-fc5497c4f-wt2h7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.77s)
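The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from the fifth line of busybox nslookup output before pinging it. A Python equivalent of that extraction; the sample output layout is an assumption (older busybox prints `Address 1:` lines, which is what the `NR==5` offset relies on), not a capture from this run:

```python
def host_ip_from_nslookup(output: str) -> str:
    """Mirror `awk 'NR==5' | cut -d' ' -f3` on nslookup output."""
    line5 = output.splitlines()[4]   # awk 'NR==5' (lines are 1-indexed in awk)
    return line5.split(" ")[2]       # cut -d' ' -f3

# Illustrative busybox-style output; real formatting varies by version.
sample = """Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal"""

print(host_ip_from_nslookup(sample))
```

The extracted address (192.168.49.1 here, matching the `ping -c 1 192.168.49.1` lines above) is the Docker bridge gateway the pods use to reach the host.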

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (26.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-484440 -v=7 --alsologtostderr
E0806 07:19:59.760306  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-484440 -v=7 --alsologtostderr: (25.179424339s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr: (1.046574611s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.23s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-484440 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 status --output json -v=7 --alsologtostderr: (1.072063261s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp testdata/cp-test.txt ha-484440:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3559036058/001/cp-test_ha-484440.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440:/home/docker/cp-test.txt ha-484440-m02:/home/docker/cp-test_ha-484440_ha-484440-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test_ha-484440_ha-484440-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440:/home/docker/cp-test.txt ha-484440-m03:/home/docker/cp-test_ha-484440_ha-484440-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test_ha-484440_ha-484440-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440:/home/docker/cp-test.txt ha-484440-m04:/home/docker/cp-test_ha-484440_ha-484440-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test_ha-484440_ha-484440-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp testdata/cp-test.txt ha-484440-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3559036058/001/cp-test_ha-484440-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m02:/home/docker/cp-test.txt ha-484440:/home/docker/cp-test_ha-484440-m02_ha-484440.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test_ha-484440-m02_ha-484440.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m02:/home/docker/cp-test.txt ha-484440-m03:/home/docker/cp-test_ha-484440-m02_ha-484440-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test_ha-484440-m02_ha-484440-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m02:/home/docker/cp-test.txt ha-484440-m04:/home/docker/cp-test_ha-484440-m02_ha-484440-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test_ha-484440-m02_ha-484440-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp testdata/cp-test.txt ha-484440-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3559036058/001/cp-test_ha-484440-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m03:/home/docker/cp-test.txt ha-484440:/home/docker/cp-test_ha-484440-m03_ha-484440.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test_ha-484440-m03_ha-484440.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m03:/home/docker/cp-test.txt ha-484440-m02:/home/docker/cp-test_ha-484440-m03_ha-484440-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test_ha-484440-m03_ha-484440-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m03:/home/docker/cp-test.txt ha-484440-m04:/home/docker/cp-test_ha-484440-m03_ha-484440-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test_ha-484440-m03_ha-484440-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp testdata/cp-test.txt ha-484440-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3559036058/001/cp-test_ha-484440-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m04:/home/docker/cp-test.txt ha-484440:/home/docker/cp-test_ha-484440-m04_ha-484440.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440 "sudo cat /home/docker/cp-test_ha-484440-m04_ha-484440.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m04:/home/docker/cp-test.txt ha-484440-m02:/home/docker/cp-test_ha-484440-m04_ha-484440-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m02 "sudo cat /home/docker/cp-test_ha-484440-m04_ha-484440-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 cp ha-484440-m04:/home/docker/cp-test.txt ha-484440-m03:/home/docker/cp-test_ha-484440-m04_ha-484440-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 ssh -n ha-484440-m03 "sudo cat /home/docker/cp-test_ha-484440-m04_ha-484440-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.95s)
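The CopyFile block above runs a full node-to-node matrix: for each of the four nodes, copy a file in, verify it, copy it back out, then fan it out to every other node, checking each hop with `ssh ... sudo cat`. A sketch of that command plan (the command strings are simplified stand-ins, not exact `minikube cp`/`ssh` invocations):

```python
def copyfile_plan(nodes):
    """Enumerate the cp/verify steps of the CopyFile matrix:
    per source node, one copy-in + check, one copy-out + check,
    and a copy + two checks for every other node."""
    steps = []
    for src in nodes:
        steps.append(f"cp testdata/cp-test.txt {src}:/home/docker/cp-test.txt")
        steps.append(f"ssh {src} cat /home/docker/cp-test.txt")
        steps.append(f"cp {src}:/home/docker/cp-test.txt /tmp/cp-test_{src}.txt")
        steps.append(f"ssh {src} cat /home/docker/cp-test.txt")
        for dst in nodes:
            if dst == src:
                continue
            steps.append(f"cp {src}:/home/docker/cp-test.txt "
                         f"{dst}:/home/docker/cp-test_{src}_{dst}.txt")
            steps.append(f"ssh {src} cat /home/docker/cp-test.txt")
            steps.append(f"ssh {dst} cat /home/docker/cp-test_{src}_{dst}.txt")
    return steps

nodes = ["ha-484440", "ha-484440-m02", "ha-484440-m03", "ha-484440-m04"]
plan = copyfile_plan(nodes)
print(len(plan))  # 4 nodes x (4 + 3*3) = 52 steps, matching the 52 log lines above
```

This quadratic fan-out is why CopyFile takes ~20s on a 4-node cluster even though each individual cp/cat is sub-second.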

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 node stop m02 -v=7 --alsologtostderr
E0806 07:20:33.029058  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.034432  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.044779  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.065062  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.105401  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.185937  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.346357  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:33.666926  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:34.307883  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:35.588108  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:38.149732  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 node stop m02 -v=7 --alsologtostderr: (10.990207338s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr: exit status 7 (782.437734ms)

                                                
                                                
-- stdout --
	ha-484440
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-484440-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-484440-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-484440-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:20:41.374118  950040 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:20:41.374227  950040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:20:41.374237  950040 out.go:304] Setting ErrFile to fd 2...
	I0806 07:20:41.374242  950040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:20:41.374483  950040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:20:41.374667  950040 out.go:298] Setting JSON to false
	I0806 07:20:41.374707  950040 mustload.go:65] Loading cluster: ha-484440
	I0806 07:20:41.374835  950040 notify.go:220] Checking for updates...
	I0806 07:20:41.375118  950040 config.go:182] Loaded profile config "ha-484440": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:20:41.375129  950040 status.go:255] checking status of ha-484440 ...
	I0806 07:20:41.375674  950040 cli_runner.go:164] Run: docker container inspect ha-484440 --format={{.State.Status}}
	I0806 07:20:41.400581  950040 status.go:330] ha-484440 host status = "Running" (err=<nil>)
	I0806 07:20:41.400610  950040 host.go:66] Checking if "ha-484440" exists ...
	I0806 07:20:41.401026  950040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-484440
	I0806 07:20:41.427652  950040 host.go:66] Checking if "ha-484440" exists ...
	I0806 07:20:41.428057  950040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:20:41.428169  950040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-484440
	I0806 07:20:41.457778  950040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33573 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/ha-484440/id_rsa Username:docker}
	I0806 07:20:41.568723  950040 ssh_runner.go:195] Run: systemctl --version
	I0806 07:20:41.574143  950040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:20:41.587715  950040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:20:41.651776  950040 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-06 07:20:41.641046762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:20:41.652377  950040 kubeconfig.go:125] found "ha-484440" server: "https://192.168.49.254:8443"
	I0806 07:20:41.652413  950040 api_server.go:166] Checking apiserver status ...
	I0806 07:20:41.652462  950040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:20:41.665104  950040 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2390/cgroup
	I0806 07:20:41.674923  950040 api_server.go:182] apiserver freezer: "3:freezer:/docker/06ebc0dd395335d83d326ffdd4c698754e2ab22b533adbf4339350e89242fdf7/kubepods/burstable/pod7d894e95af26399ece2b99d24c22668f/3f7afe1f4adf0c15d2cdb3936afb89d7dcb1b7f79a7fce7143a4cb09da2d782c"
	I0806 07:20:41.674998  950040 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06ebc0dd395335d83d326ffdd4c698754e2ab22b533adbf4339350e89242fdf7/kubepods/burstable/pod7d894e95af26399ece2b99d24c22668f/3f7afe1f4adf0c15d2cdb3936afb89d7dcb1b7f79a7fce7143a4cb09da2d782c/freezer.state
	I0806 07:20:41.687206  950040 api_server.go:204] freezer state: "THAWED"
	I0806 07:20:41.687241  950040 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0806 07:20:41.695437  950040 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0806 07:20:41.695554  950040 status.go:422] ha-484440 apiserver status = Running (err=<nil>)
	I0806 07:20:41.695567  950040 status.go:257] ha-484440 status: &{Name:ha-484440 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:20:41.695585  950040 status.go:255] checking status of ha-484440-m02 ...
	I0806 07:20:41.695902  950040 cli_runner.go:164] Run: docker container inspect ha-484440-m02 --format={{.State.Status}}
	I0806 07:20:41.718760  950040 status.go:330] ha-484440-m02 host status = "Stopped" (err=<nil>)
	I0806 07:20:41.718788  950040 status.go:343] host is not running, skipping remaining checks
	I0806 07:20:41.718795  950040 status.go:257] ha-484440-m02 status: &{Name:ha-484440-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:20:41.718816  950040 status.go:255] checking status of ha-484440-m03 ...
	I0806 07:20:41.719133  950040 cli_runner.go:164] Run: docker container inspect ha-484440-m03 --format={{.State.Status}}
	I0806 07:20:41.738957  950040 status.go:330] ha-484440-m03 host status = "Running" (err=<nil>)
	I0806 07:20:41.739004  950040 host.go:66] Checking if "ha-484440-m03" exists ...
	I0806 07:20:41.739294  950040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-484440-m03
	I0806 07:20:41.758268  950040 host.go:66] Checking if "ha-484440-m03" exists ...
	I0806 07:20:41.758577  950040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:20:41.758617  950040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-484440-m03
	I0806 07:20:41.776971  950040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/ha-484440-m03/id_rsa Username:docker}
	I0806 07:20:41.872815  950040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:20:41.886146  950040 kubeconfig.go:125] found "ha-484440" server: "https://192.168.49.254:8443"
	I0806 07:20:41.886180  950040 api_server.go:166] Checking apiserver status ...
	I0806 07:20:41.886235  950040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:20:41.904485  950040 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2267/cgroup
	I0806 07:20:41.914746  950040 api_server.go:182] apiserver freezer: "3:freezer:/docker/4b8d799b6a93baffdbe315ed210c63b0b8d1e4fbb659f7ffeb2fbf6d52bf325f/kubepods/burstable/podec85634f7579c950a1e35ab7d9140638/7020e338d31c32605cc2a1cae6b8b28031bad547c8edfcadfc19e405cbfb4805"
	I0806 07:20:41.914818  950040 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4b8d799b6a93baffdbe315ed210c63b0b8d1e4fbb659f7ffeb2fbf6d52bf325f/kubepods/burstable/podec85634f7579c950a1e35ab7d9140638/7020e338d31c32605cc2a1cae6b8b28031bad547c8edfcadfc19e405cbfb4805/freezer.state
	I0806 07:20:41.924202  950040 api_server.go:204] freezer state: "THAWED"
	I0806 07:20:41.924235  950040 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0806 07:20:41.932041  950040 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0806 07:20:41.932074  950040 status.go:422] ha-484440-m03 apiserver status = Running (err=<nil>)
	I0806 07:20:41.932085  950040 status.go:257] ha-484440-m03 status: &{Name:ha-484440-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:20:41.932101  950040 status.go:255] checking status of ha-484440-m04 ...
	I0806 07:20:41.932427  950040 cli_runner.go:164] Run: docker container inspect ha-484440-m04 --format={{.State.Status}}
	I0806 07:20:41.950133  950040 status.go:330] ha-484440-m04 host status = "Running" (err=<nil>)
	I0806 07:20:41.950161  950040 host.go:66] Checking if "ha-484440-m04" exists ...
	I0806 07:20:41.950461  950040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-484440-m04
	I0806 07:20:41.967599  950040 host.go:66] Checking if "ha-484440-m04" exists ...
	I0806 07:20:41.967924  950040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:20:41.967972  950040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-484440-m04
	I0806 07:20:41.986129  950040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/ha-484440-m04/id_rsa Username:docker}
	I0806 07:20:42.082244  950040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:20:42.101652  950040 status.go:257] ha-484440-m04 status: &{Name:ha-484440-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (73.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 node start m02 -v=7 --alsologtostderr
E0806 07:20:43.270177  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:20:53.511005  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:21:13.991235  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 node start m02 -v=7 --alsologtostderr: (1m12.078335871s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
E0806 07:21:54.951921  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr: (1.06595953s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (73.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-484440 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-484440 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-484440 -v=7 --alsologtostderr: (34.273755036s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-484440 --wait=true -v=7 --alsologtostderr
E0806 07:23:16.872105  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:24:32.075034  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 07:25:33.029349  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:26:00.712933  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-484440 --wait=true -v=7 --alsologtostderr: (3m43.922221261s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-484440
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.35s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 node delete m03 -v=7 --alsologtostderr: (11.414373778s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 stop -v=7 --alsologtostderr: (32.630324037s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr: exit status 7 (108.301143ms)

                                                
                                                
-- stdout --
	ha-484440
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-484440-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-484440-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:27:00.723153  978177 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:27:00.723382  978177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:27:00.723409  978177 out.go:304] Setting ErrFile to fd 2...
	I0806 07:27:00.723429  978177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:27:00.723740  978177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:27:00.723967  978177 out.go:298] Setting JSON to false
	I0806 07:27:00.724040  978177 mustload.go:65] Loading cluster: ha-484440
	I0806 07:27:00.724143  978177 notify.go:220] Checking for updates...
	I0806 07:27:00.724529  978177 config.go:182] Loaded profile config "ha-484440": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:27:00.724567  978177 status.go:255] checking status of ha-484440 ...
	I0806 07:27:00.725375  978177 cli_runner.go:164] Run: docker container inspect ha-484440 --format={{.State.Status}}
	I0806 07:27:00.743332  978177 status.go:330] ha-484440 host status = "Stopped" (err=<nil>)
	I0806 07:27:00.743364  978177 status.go:343] host is not running, skipping remaining checks
	I0806 07:27:00.743372  978177 status.go:257] ha-484440 status: &{Name:ha-484440 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:27:00.743402  978177 status.go:255] checking status of ha-484440-m02 ...
	I0806 07:27:00.743768  978177 cli_runner.go:164] Run: docker container inspect ha-484440-m02 --format={{.State.Status}}
	I0806 07:27:00.759934  978177 status.go:330] ha-484440-m02 host status = "Stopped" (err=<nil>)
	I0806 07:27:00.759953  978177 status.go:343] host is not running, skipping remaining checks
	I0806 07:27:00.759961  978177 status.go:257] ha-484440-m02 status: &{Name:ha-484440-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:27:00.759979  978177 status.go:255] checking status of ha-484440-m04 ...
	I0806 07:27:00.760273  978177 cli_runner.go:164] Run: docker container inspect ha-484440-m04 --format={{.State.Status}}
	I0806 07:27:00.782035  978177 status.go:330] ha-484440-m04 host status = "Stopped" (err=<nil>)
	I0806 07:27:00.782059  978177 status.go:343] host is not running, skipping remaining checks
	I0806 07:27:00.782081  978177 status.go:257] ha-484440-m04 status: &{Name:ha-484440-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (86.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-484440 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-484440 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.567604539s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-484440 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-484440 --control-plane -v=7 --alsologtostderr: (41.733284392s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-484440 status -v=7 --alsologtostderr: (1.094684205s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
TestImageBuild/serial/Setup (31.55s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-094567 --driver=docker  --container-runtime=docker
E0806 07:29:32.075362  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-094567 --driver=docker  --container-runtime=docker: (31.54872619s)
--- PASS: TestImageBuild/serial/Setup (31.55s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-094567
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-094567: (1.676666705s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-094567
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.65s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-094567
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.65s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-094567
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

                                                
                                    
TestJSONOutput/start/Command (90.25s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-969849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0806 07:30:33.029079  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 07:30:55.120737  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-969849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m30.243272002s)
--- PASS: TestJSONOutput/start/Command (90.25s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-969849 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.53s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-969849 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-969849 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-969849 --output=json --user=testUser: (10.87421756s)
--- PASS: TestJSONOutput/stop/Command (10.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-221084 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-221084 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.568367ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ddcab380-4a40-4a2e-aa21-811748d58c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-221084] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cde6fc75-9c8b-4210-8a47-84798ec0cc0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"b6348d81-aee8-40f2-9bc8-e4ed2afb82b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"411a0113-2267-4d37-89ad-f5aae8613464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig"}}
	{"specversion":"1.0","id":"99698108-9510-44c8-ac38-53ddc70d4eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube"}}
	{"specversion":"1.0","id":"e6cda4e5-36ad-4b47-9e54-6a9799e0ad50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3672cf17-04a3-4472-bfdf-3436d02c1828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93c5525e-ee37-468f-94d1-6cef757442e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-221084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-221084
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.19s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-071336 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-071336 --network=: (33.14622495s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-071336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-071336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-071336: (2.024106212s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.19s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.39s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-684098 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-684098 --network=bridge: (31.416815241s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-684098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-684098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-684098: (1.951198464s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.39s)

                                                
                                    
TestKicExistingNetwork (35.65s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-174474 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-174474 --network=existing-network: (33.453074951s)
helpers_test.go:175: Cleaning up "existing-network-174474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-174474
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-174474: (2.034694022s)
--- PASS: TestKicExistingNetwork (35.65s)

                                                
                                    
TestKicCustomSubnet (36.12s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-953228 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-953228 --subnet=192.168.60.0/24: (33.941704732s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-953228 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-953228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-953228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-953228: (2.157707189s)
--- PASS: TestKicCustomSubnet (36.12s)

TestKicStaticIP (32.45s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-226363 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-226363 --static-ip=192.168.200.200: (30.153551046s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-226363 ip
E0806 07:34:32.074834  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "static-ip-226363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-226363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-226363: (2.136209951s)
--- PASS: TestKicStaticIP (32.45s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-690206 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-690206 --driver=docker  --container-runtime=docker: (33.6253502s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-692708 --driver=docker  --container-runtime=docker
E0806 07:35:33.029401  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-692708 --driver=docker  --container-runtime=docker: (35.706666282s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-690206
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-692708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-692708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-692708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-692708: (2.205054329s)
helpers_test.go:175: Cleaning up "first-690206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-690206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-690206: (2.129851258s)
--- PASS: TestMinikubeProfile (74.88s)

TestMountStart/serial/StartWithMountFirst (8.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-091938 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-091938 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.122077175s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.12s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-091938 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (10.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-104532 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-104532 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.324965451s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.33s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-104532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-091938 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-091938 --alsologtostderr -v=5: (1.481015478s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-104532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-104532
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-104532: (1.227858526s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (8.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-104532
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-104532: (7.541997564s)
--- PASS: TestMountStart/serial/RestartStopped (8.54s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-104532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (94.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-066831 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0806 07:36:56.073789  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-066831 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m33.510315408s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.12s)

TestMultiNode/serial/DeployApp2Nodes (37.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-066831 -- rollout status deployment/busybox: (2.216768454s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-g2pzb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-pmt6b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-g2pzb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-pmt6b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-g2pzb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-pmt6b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.59s)

TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-g2pzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-g2pzb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-pmt6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-066831 -- exec busybox-fc5497c4f-pmt6b -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (20.46s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-066831 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-066831 -v 3 --alsologtostderr: (19.796983917s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.46s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-066831 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.31s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp testdata/cp-test.txt multinode-066831:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile227019304/001/cp-test_multinode-066831.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831:/home/docker/cp-test.txt multinode-066831-m02:/home/docker/cp-test_multinode-066831_multinode-066831-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test_multinode-066831_multinode-066831-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831:/home/docker/cp-test.txt multinode-066831-m03:/home/docker/cp-test_multinode-066831_multinode-066831-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test_multinode-066831_multinode-066831-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp testdata/cp-test.txt multinode-066831-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile227019304/001/cp-test_multinode-066831-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m02:/home/docker/cp-test.txt multinode-066831:/home/docker/cp-test_multinode-066831-m02_multinode-066831.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test_multinode-066831-m02_multinode-066831.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m02:/home/docker/cp-test.txt multinode-066831-m03:/home/docker/cp-test_multinode-066831-m02_multinode-066831-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test_multinode-066831-m02_multinode-066831-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp testdata/cp-test.txt multinode-066831-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile227019304/001/cp-test_multinode-066831-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m03:/home/docker/cp-test.txt multinode-066831:/home/docker/cp-test_multinode-066831-m03_multinode-066831.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831 "sudo cat /home/docker/cp-test_multinode-066831-m03_multinode-066831.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 cp multinode-066831-m03:/home/docker/cp-test.txt multinode-066831-m02:/home/docker/cp-test_multinode-066831-m03_multinode-066831-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 ssh -n multinode-066831-m02 "sudo cat /home/docker/cp-test_multinode-066831-m03_multinode-066831-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.31s)

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-066831 node stop m03: (1.205608235s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-066831 status: exit status 7 (494.613053ms)

-- stdout --
	multinode-066831
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-066831-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-066831-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr: exit status 7 (522.929156ms)

-- stdout --
	multinode-066831
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-066831-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-066831-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0806 07:39:07.342130 1052870 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:39:07.342267 1052870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:39:07.342293 1052870 out.go:304] Setting ErrFile to fd 2...
	I0806 07:39:07.342314 1052870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:39:07.342580 1052870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:39:07.342819 1052870 out.go:298] Setting JSON to false
	I0806 07:39:07.342876 1052870 mustload.go:65] Loading cluster: multinode-066831
	I0806 07:39:07.342998 1052870 notify.go:220] Checking for updates...
	I0806 07:39:07.343428 1052870 config.go:182] Loaded profile config "multinode-066831": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:39:07.343499 1052870 status.go:255] checking status of multinode-066831 ...
	I0806 07:39:07.344038 1052870 cli_runner.go:164] Run: docker container inspect multinode-066831 --format={{.State.Status}}
	I0806 07:39:07.369051 1052870 status.go:330] multinode-066831 host status = "Running" (err=<nil>)
	I0806 07:39:07.369087 1052870 host.go:66] Checking if "multinode-066831" exists ...
	I0806 07:39:07.369431 1052870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-066831
	I0806 07:39:07.401011 1052870 host.go:66] Checking if "multinode-066831" exists ...
	I0806 07:39:07.401385 1052870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:39:07.401442 1052870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-066831
	I0806 07:39:07.420622 1052870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33698 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/multinode-066831/id_rsa Username:docker}
	I0806 07:39:07.512544 1052870 ssh_runner.go:195] Run: systemctl --version
	I0806 07:39:07.517022 1052870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:39:07.529639 1052870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0806 07:39:07.596865 1052870 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-06 07:39:07.587654557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0806 07:39:07.597424 1052870 kubeconfig.go:125] found "multinode-066831" server: "https://192.168.58.2:8443"
	I0806 07:39:07.597447 1052870 api_server.go:166] Checking apiserver status ...
	I0806 07:39:07.597486 1052870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 07:39:07.609324 1052870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2280/cgroup
	I0806 07:39:07.618704 1052870 api_server.go:182] apiserver freezer: "3:freezer:/docker/7ae8afb970ee5408ad99381dacfc8f50e46462d7c8113a862a9e5e54ff5ca9c1/kubepods/burstable/pod39ac86f6ed42a3a5231f0759caa36996/14b1248e504a548d8461bb11f0ccd331005faf9e281cadd0d8f7616b70caeb5b"
	I0806 07:39:07.618775 1052870 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7ae8afb970ee5408ad99381dacfc8f50e46462d7c8113a862a9e5e54ff5ca9c1/kubepods/burstable/pod39ac86f6ed42a3a5231f0759caa36996/14b1248e504a548d8461bb11f0ccd331005faf9e281cadd0d8f7616b70caeb5b/freezer.state
	I0806 07:39:07.627222 1052870 api_server.go:204] freezer state: "THAWED"
	I0806 07:39:07.627251 1052870 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0806 07:39:07.634921 1052870 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0806 07:39:07.634949 1052870 status.go:422] multinode-066831 apiserver status = Running (err=<nil>)
	I0806 07:39:07.634961 1052870 status.go:257] multinode-066831 status: &{Name:multinode-066831 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:39:07.634978 1052870 status.go:255] checking status of multinode-066831-m02 ...
	I0806 07:39:07.635278 1052870 cli_runner.go:164] Run: docker container inspect multinode-066831-m02 --format={{.State.Status}}
	I0806 07:39:07.651385 1052870 status.go:330] multinode-066831-m02 host status = "Running" (err=<nil>)
	I0806 07:39:07.651417 1052870 host.go:66] Checking if "multinode-066831-m02" exists ...
	I0806 07:39:07.651783 1052870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-066831-m02
	I0806 07:39:07.670752 1052870 host.go:66] Checking if "multinode-066831-m02" exists ...
	I0806 07:39:07.671051 1052870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 07:39:07.671088 1052870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-066831-m02
	I0806 07:39:07.689292 1052870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33703 SSHKeyPath:/home/jenkins/minikube-integration/19370-879111/.minikube/machines/multinode-066831-m02/id_rsa Username:docker}
	I0806 07:39:07.781120 1052870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 07:39:07.793831 1052870 status.go:257] multinode-066831-m02 status: &{Name:multinode-066831-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:39:07.793868 1052870 status.go:255] checking status of multinode-066831-m03 ...
	I0806 07:39:07.794197 1052870 cli_runner.go:164] Run: docker container inspect multinode-066831-m03 --format={{.State.Status}}
	I0806 07:39:07.811978 1052870 status.go:330] multinode-066831-m03 host status = "Stopped" (err=<nil>)
	I0806 07:39:07.812001 1052870 status.go:343] host is not running, skipping remaining checks
	I0806 07:39:07.812009 1052870 status.go:257] multinode-066831-m03 status: &{Name:multinode-066831-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-066831 node start m03 -v=7 --alsologtostderr: (10.64608232s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (89.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-066831
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-066831
E0806 07:39:32.074482  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-066831: (22.628965075s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-066831 --wait=true -v=8 --alsologtostderr
E0806 07:40:33.028799  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-066831 --wait=true -v=8 --alsologtostderr: (1m6.775714981s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-066831
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-066831 node delete m03: (4.774869403s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.46s)
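The `kubectl get nodes -o go-template` check above walks every node's `.status.conditions` and prints the status of the `Ready` condition. A minimal Python sketch of the same traversal, run against a hypothetical `kubectl get nodes -o json` payload (node names and condition values here are illustrative, not taken from a live cluster):

```python
import json

# Shape of `kubectl get nodes -o json`; names/values are illustrative.
payload = json.loads("""
{"items": [
  {"metadata": {"name": "multinode-066831"},
   "status": {"conditions": [
     {"type": "MemoryPressure", "status": "False"},
     {"type": "Ready", "status": "True"}]}},
  {"metadata": {"name": "multinode-066831-m02"},
   "status": {"conditions": [
     {"type": "Ready", "status": "True"}]}}
]}
""")

# Equivalent of: {{range .items}}{{range .status.conditions}}
#                {{if eq .type "Ready"}} {{.status}}{{end}}{{end}}{{end}}
ready = [c["status"]
         for item in payload["items"]
         for c in item["status"]["conditions"]
         if c["type"] == "Ready"]
print(ready)  # the test passes only when every entry is "True"
```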

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-066831 stop: (21.637624121s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-066831 status: exit status 7 (80.532478ms)

                                                
                                                
-- stdout --
	multinode-066831
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-066831-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr: exit status 7 (86.90288ms)

                                                
                                                
-- stdout --
	multinode-066831
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-066831-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 07:41:15.957682 1065980 out.go:291] Setting OutFile to fd 1 ...
	I0806 07:41:15.957882 1065980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:41:15.957909 1065980 out.go:304] Setting ErrFile to fd 2...
	I0806 07:41:15.957930 1065980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 07:41:15.958215 1065980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-879111/.minikube/bin
	I0806 07:41:15.958448 1065980 out.go:298] Setting JSON to false
	I0806 07:41:15.958511 1065980 mustload.go:65] Loading cluster: multinode-066831
	I0806 07:41:15.958630 1065980 notify.go:220] Checking for updates...
	I0806 07:41:15.959024 1065980 config.go:182] Loaded profile config "multinode-066831": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 07:41:15.959073 1065980 status.go:255] checking status of multinode-066831 ...
	I0806 07:41:15.959622 1065980 cli_runner.go:164] Run: docker container inspect multinode-066831 --format={{.State.Status}}
	I0806 07:41:15.978430 1065980 status.go:330] multinode-066831 host status = "Stopped" (err=<nil>)
	I0806 07:41:15.978451 1065980 status.go:343] host is not running, skipping remaining checks
	I0806 07:41:15.978459 1065980 status.go:257] multinode-066831 status: &{Name:multinode-066831 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 07:41:15.978500 1065980 status.go:255] checking status of multinode-066831-m02 ...
	I0806 07:41:15.978803 1065980 cli_runner.go:164] Run: docker container inspect multinode-066831-m02 --format={{.State.Status}}
	I0806 07:41:15.999382 1065980 status.go:330] multinode-066831-m02 host status = "Stopped" (err=<nil>)
	I0806 07:41:15.999404 1065980 status.go:343] host is not running, skipping remaining checks
	I0806 07:41:15.999411 1065980 status.go:257] multinode-066831-m02 status: &{Name:multinode-066831-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.81s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-066831 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-066831 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.715327864s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-066831 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.41s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-066831
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-066831-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-066831-m02 --driver=docker  --container-runtime=docker: exit status 14 (77.10188ms)

                                                
                                                
-- stdout --
	* [multinode-066831-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-066831-m02' is duplicated with machine name 'multinode-066831-m02' in profile 'multinode-066831'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-066831-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-066831-m03 --driver=docker  --container-runtime=docker: (36.25438371s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-066831
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-066831: exit status 80 (325.942873ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-066831 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-066831-m03 already exists in multinode-066831-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-066831-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-066831-m03: (2.151114512s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.86s)

                                                
                                    
TestPreload (139.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-251519 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0806 07:44:32.074663  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-251519 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m38.928049323s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-251519 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-251519 image pull gcr.io/k8s-minikube/busybox: (1.31993115s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-251519
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-251519: (10.872301119s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-251519 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-251519 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (25.642953296s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-251519 image list
helpers_test.go:175: Cleaning up "test-preload-251519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-251519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-251519: (2.367244376s)
--- PASS: TestPreload (139.43s)

                                                
                                    
TestScheduledStopUnix (106.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-494684 --memory=2048 --driver=docker  --container-runtime=docker
E0806 07:45:33.029445  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-494684 --memory=2048 --driver=docker  --container-runtime=docker: (33.298564359s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494684 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-494684 -n scheduled-stop-494684
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494684 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494684 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494684 -n scheduled-stop-494684
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-494684
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494684 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-494684
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-494684: exit status 7 (64.445375ms)

                                                
                                                
-- stdout --
	scheduled-stop-494684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494684 -n scheduled-stop-494684
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494684 -n scheduled-stop-494684: exit status 7 (64.299335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-494684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-494684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-494684: (1.806193611s)
--- PASS: TestScheduledStopUnix (106.60s)

                                                
                                    
TestSkaffold (117.29s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3071033416 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-515239 --memory=2600 --driver=docker  --container-runtime=docker
E0806 07:47:35.121524  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-515239 --memory=2600 --driver=docker  --container-runtime=docker: (34.073885565s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3071033416 run --minikube-profile skaffold-515239 --kube-context skaffold-515239 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3071033416 run --minikube-profile skaffold-515239 --kube-context skaffold-515239 --status-check=true --port-forward=false --interactive=false: (1m7.503698831s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5659fd8557-d8kr6" [5c65ec4d-dba1-4338-b194-610fa6061d37] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004240562s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-99bb5946c-pwqgv" [ef15b739-e98b-4277-8fa7-b6d1d7a7fde3] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003560918s
helpers_test.go:175: Cleaning up "skaffold-515239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-515239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-515239: (3.014738063s)
--- PASS: TestSkaffold (117.29s)

                                                
                                    
TestInsufficientStorage (11.29s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-630305 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-630305 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.993209724s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"87917d76-511a-4d04-9906-06e6ed756673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-630305] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"622cfaae-cc8c-4c71-9b1c-a886439da7bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"24ba7eaf-a232-48cb-a0fe-f94a599e9ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92a3f0ea-f958-457e-92ea-bc0d2c957567","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig"}}
	{"specversion":"1.0","id":"9a86e7b4-1553-4b6e-a7bc-dd78c33e7ba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube"}}
	{"specversion":"1.0","id":"1df60926-7dcf-4a68-9f93-16c42d6b43e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd2e36fc-beb4-41cc-aa25-df0acf27d12d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82121a62-6bf0-4964-949d-3383b73ba56f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f2250d50-b204-4f6f-8575-8984a0102392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7a3a5df7-6602-491f-861a-adbdfc354436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb671615-d665-4237-8a52-49da00147349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c99e74f2-ecb8-4e42-b1d9-6650edb103dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-630305\" primary control-plane node in \"insufficient-storage-630305\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b07d907-d6f4-44d8-968b-bc97567266fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfb82be9-9b32-46c1-85cf-617540aeac8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4de4b448-a34a-4dbb-93c6-3eed3b22afed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
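With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as in the stdout block above; the final `io.k8s.sigs.minikube.error` event carries the exit code and the `RSRC_*` reason that the test asserts on. A minimal sketch that folds such a stream into error tuples, using two events abridged from this log:

```python
import json

# Two events abridged from the stdout above.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...",'
    '"name":"Creating Container","totalsteps":"19"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space! (/var is at 100%% of capacity)."}}',
]

errors = []
for line in stream:
    event = json.loads(line)
    # Error events carry the exit code and the RSRC_* reason name.
    if event["type"].endswith(".error"):
        errors.append((event["data"]["name"], event["data"]["exitcode"]))

print(errors)  # [('RSRC_DOCKER_STORAGE', '26')]
```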
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-630305 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-630305 --output=json --layout=cluster: exit status 7 (287.404494ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-630305","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-630305","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 07:49:06.871205 1100122 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-630305" does not appear in /home/jenkins/minikube-integration/19370-879111/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-630305 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-630305 --output=json --layout=cluster: exit status 7 (302.699299ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-630305","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-630305","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 07:49:07.172918 1100184 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-630305" does not appear in /home/jenkins/minikube-integration/19370-879111/kubeconfig
	E0806 07:49:07.183637 1100184 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/insufficient-storage-630305/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-630305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-630305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-630305: (1.708904522s)
--- PASS: TestInsufficientStorage (11.29s)
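status_test.go asserts on the `minikube status --output=json --layout=cluster` payload shown above (status code 507, `InsufficientStorage`, with apiserver and kubelet stopped). A minimal sketch that extracts the cluster-level and per-component status names from that payload, abridged from this log:

```python
import json

# Abridged from the `--output=json --layout=cluster` stdout above.
payload = json.loads("""
{"Name": "insufficient-storage-630305",
 "StatusCode": 507, "StatusName": "InsufficientStorage",
 "StatusDetail": "/var is almost out of disk space",
 "Nodes": [{"Name": "insufficient-storage-630305",
            "StatusCode": 507, "StatusName": "InsufficientStorage",
            "Components": {"apiserver": {"Name": "apiserver", "StatusCode": 405, "StatusName": "Stopped"},
                           "kubelet": {"Name": "kubelet", "StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

cluster = payload["StatusName"]
components = {name: c["StatusName"]
              for node in payload["Nodes"]
              for name, c in node["Components"].items()}
print(cluster, components)  # InsufficientStorage {'apiserver': 'Stopped', 'kubelet': 'Stopped'}
```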

                                                
                                    
TestRunningBinaryUpgrade (105.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2141449760 start -p running-upgrade-292442 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2141449760 start -p running-upgrade-292442 --memory=2200 --vm-driver=docker  --container-runtime=docker: (44.274276198s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-292442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-292442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.988668659s)
helpers_test.go:175: Cleaning up "running-upgrade-292442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-292442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-292442: (3.097469048s)
--- PASS: TestRunningBinaryUpgrade (105.04s)

                                                
                                    
TestMissingContainerUpgrade (157.52s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3890656905 start -p missing-upgrade-779445 --memory=2200 --driver=docker  --container-runtime=docker
E0806 07:56:27.410936  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3890656905 start -p missing-upgrade-779445 --memory=2200 --driver=docker  --container-runtime=docker: (1m20.618913162s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-779445
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-779445: (10.331333227s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-779445
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-779445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0806 07:58:43.565338  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-779445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.251113674s)
helpers_test.go:175: Cleaning up "missing-upgrade-779445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-779445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-779445: (2.656900542s)
--- PASS: TestMissingContainerUpgrade (157.52s)

                                                
                                    
TestPause/serial/Start (99.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-757689 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-757689 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m39.236700554s)
--- PASS: TestPause/serial/Start (99.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (86.719323ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-904537] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-879111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-879111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-904537 --driver=docker  --container-runtime=docker
E0806 07:50:33.028997  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-904537 --driver=docker  --container-runtime=docker: (31.103960899s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-904537 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.45s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --driver=docker  --container-runtime=docker: (14.626455678s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-904537 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-904537 status -o json: exit status 2 (312.172835ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-904537","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-904537
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-904537: (1.750720782s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.69s)
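The `exit status 2` here is expected: the profile's container is up while Kubernetes inside it is stopped, and `minikube status` reports that mixed state with a non-zero code. A short sketch that parses the status JSON captured above and applies the same check the test makes:

```python
import json

# Output of `minikube status -o json` captured in the log above.
raw = ('{"Name":"NoKubernetes-904537","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# Host container running, Kubernetes components stopped: exactly the
# state that --no-kubernetes is supposed to produce.
host_up = status["Host"] == "Running"
k8s_down = status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(host_up and k8s_down)  # True
```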

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-757689 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-757689 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.438424462s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.45s)

                                                
                                    
TestNoKubernetes/serial/Start (7.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-904537 --no-kubernetes --driver=docker  --container-runtime=docker: (7.365320248s)
--- PASS: TestNoKubernetes/serial/Start (7.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-904537 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-904537 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.838253ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
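The `Non-zero exit` and `ssh: Process exited with status 3` lines are the success path here: `systemctl is-active --quiet` exits 0 only when the unit is active (3 is the conventional status for an inactive unit), so a non-zero exit confirms kubelet is not running. A tiny sketch of that interpretation:

```python
def kubelet_active(systemctl_exit_status: int) -> bool:
    """Interpret the exit status of `systemctl is-active --quiet kubelet`.

    Per systemd convention, exit 0 means active; any non-zero status
    (3 for inactive units) means not active.
    """
    return systemctl_exit_status == 0


# Status 3 from the log above: kubelet is not running, as the test expects.
print(kubelet_active(3))  # False
```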

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-904537
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-904537: (1.327828104s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-904537 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-904537 --driver=docker  --container-runtime=docker: (8.58317881s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-904537 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-904537 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.507235ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-757689 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-757689 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-757689 --output=json --layout=cluster: exit status 2 (382.724471ms)

                                                
                                                
-- stdout --
	{"Name":"pause-757689","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-757689","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
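`exit status 2` plus a parseable JSON body is again the expected shape: a paused profile reports StatusCode 418 ("Paused") at the top level and per-component states underneath. A sketch that parses the cluster-layout JSON captured above:

```python
import json

# Output of `minikube status --output=json --layout=cluster` from the log.
raw = """{"Name":"pause-757689","StatusCode":418,"StatusName":"Paused",\
"Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, \
kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":\
"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,\
"StatusName":"OK"}},"Nodes":[{"Name":"pause-757689","StatusCode":200,\
"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver",\
"StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet",\
"StatusCode":405,"StatusName":"Stopped"}}}]}"""

cluster = json.loads(raw)
node = cluster["Nodes"][0]["Components"]

# A paused profile: apiserver paused (418), kubelet stopped (405).
print(cluster["StatusName"],
      node["apiserver"]["StatusName"],
      node["kubelet"]["StatusName"])  # Paused Paused Stopped
```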

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-757689 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestPause/serial/PauseAgain (1.06s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-757689 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-757689 --alsologtostderr -v=5: (1.062252595s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
TestPause/serial/DeletePaused (2.33s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-757689 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-757689 --alsologtostderr -v=5: (2.329855294s)
--- PASS: TestPause/serial/DeletePaused (2.33s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-757689
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-757689: exit status 1 (28.141298ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-757689: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)
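The `docker volume inspect` failure is what this test wants to see: after `delete -p`, inspecting the profile's volume yields an empty JSON array on stdout and a non-zero exit, meaning the volume no longer exists. A sketch of that interpretation (stdout and exit status taken from the log above):

```python
import json

# From the log: `docker volume inspect pause-757689` printed "[]" and
# exited 1 with "no such volume" on stderr.
stdout, exit_status = "[]", 1

volumes = json.loads(stdout)
deleted = exit_status != 0 and volumes == []
print(deleted)  # True
```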

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (98.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2581101811 start -p stopped-upgrade-020433 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0806 07:59:32.075278  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2581101811 start -p stopped-upgrade-020433 --memory=2200 --vm-driver=docker  --container-runtime=docker: (59.40917408s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2581101811 -p stopped-upgrade-020433 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2581101811 -p stopped-upgrade-020433 stop: (2.430594847s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-020433 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0806 08:00:33.029383  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-020433 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.123533053s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.96s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (103.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m43.144813796s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.14s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-020433
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-020433: (1.711381128s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m24.114301012s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t4qsz" [083c2cf1-cdeb-4129-bd8d-c684f42be706] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t4qsz" [083c2cf1-cdeb-4129-bd8d-c684f42be706] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.003173847s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q9qz7" [9b348e8b-3d7e-4c90-af21-3b44a169a695] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00388757s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kfgw2" [c223a34b-a7e5-4d0f-812a-ecbe848295ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kfgw2" [c223a34b-a7e5-4d0f-812a-ecbe848295ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004637606s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m24.303653753s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0806 08:03:43.565228  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 08:04:15.122353  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m10.189318861s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f5zhn" [6bba4e62-d6cb-466f-8e80-a1b05d9fe5e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005465683s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bjblf" [246c7447-fc86-469c-824f-102a33f2a703] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0806 08:04:32.074968  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bjblf" [246c7447-fc86-469c-824f-102a33f2a703] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003589785s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5gzs6" [b7325f97-006b-485a-85d0-551ab47a5662] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5gzs6" [b7325f97-006b-485a-85d0-551ab47a5662] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003768807s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/false/Start (70.57s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m10.571976286s)
--- PASS: TestNetworkPlugins/group/false/Start (70.57s)

TestNetworkPlugins/group/enable-default-cni/Start (55.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0806 08:05:33.029360  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (55.777555933s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.78s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m2n7l" [abb1ca17-a04f-442d-bfd0-9d677f300c13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m2n7l" [abb1ca17-a04f-442d-bfd0-9d677f300c13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004077133s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/false/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.35s)

TestNetworkPlugins/group/false/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tp5bm" [9fd2e44b-196a-41b5-988f-8fb09329e1bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tp5bm" [9fd2e44b-196a-41b5-988f-8fb09329e1bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004527053s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/false/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.28s)

TestNetworkPlugins/group/false/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.31s)

TestNetworkPlugins/group/false/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (73.3s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m13.297211899s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.30s)

TestNetworkPlugins/group/bridge/Start (61.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0806 08:07:27.140363  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.145580  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.155771  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.175986  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.216213  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.296447  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.456751  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:27.776903  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:28.417914  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:29.699031  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:32.259246  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:35.805427  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:35.810682  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:35.820933  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:35.841245  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:35.881959  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:35.962368  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:36.122576  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:36.442918  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:37.083103  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:37.379720  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:07:38.363672  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:40.924447  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:46.045312  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:07:47.620469  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m1.018022974s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l9t6f" [29a30322-752e-4a2b-a5c8-e764963f3ea1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0806 08:07:56.285603  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-l9t6f" [29a30322-752e-4a2b-a5c8-e764963f3ea1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009380067s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n7rpg" [d101095a-7d2c-4695-a473-1d43e7bac522] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003655848s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fm9vm" [23020c0d-9ab8-40b4-973b-ab6d52f7d334] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fm9vm" [23020c0d-9ab8-40b4-973b-ab6d52f7d334] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004160377s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/Start (94.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-845570 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m34.135337001s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (94.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (172.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-053290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0806 08:08:43.565498  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 08:08:49.062022  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:08:57.726779  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:09:28.280177  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.285428  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.295669  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.315941  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.356187  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.436510  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.596837  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:28.732157  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.737547  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.747781  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.768105  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.808482  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.888784  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:28.917331  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:29.048924  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:29.369349  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:29.557863  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:30.012737  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:30.838068  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:31.292940  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:32.074665  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 08:09:33.399124  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:33.854034  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:38.520222  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:38.974692  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:09:48.761036  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:09:49.215400  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-053290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m52.951812823s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.95s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-845570 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-845570 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xw4tl" [60cc7130-1ca4-4dac-b6b6-12839c3b6f6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0806 08:10:06.612274  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xw4tl" [60cc7130-1ca4-4dac-b6b6-12839c3b6f6f] Running
E0806 08:10:09.241987  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:10:09.695625  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:10:10.982792  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004037073s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-845570 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-845570 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)
E0806 08:21:40.218581  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:50.458887  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:22:03.775503  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:22:10.939527  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (54.46s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-296432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0806 08:10:50.202205  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:10:50.655846  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:11:11.522632  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.527930  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.538155  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.558404  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.598650  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.678939  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:11.839353  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:12.160205  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:12.800939  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:14.081206  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:16.641345  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:17.535234  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.540549  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.550838  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.571151  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.611534  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.691870  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:17.852248  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:18.172584  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:18.813714  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:20.094829  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:21.761544  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:11:22.655815  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:11:27.777017  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-296432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (54.461538684s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.46s)

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-296432 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d6b1804-8aaf-4936-8a45-1ca782219382] Pending
helpers_test.go:344: "busybox" [2d6b1804-8aaf-4936-8a45-1ca782219382] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0806 08:11:32.003768  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2d6b1804-8aaf-4936-8a45-1ca782219382] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003398986s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-296432 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-053290 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [58c19366-021a-45b0-9d69-1f99e4d751dd] Pending
helpers_test.go:344: "busybox" [58c19366-021a-45b0-9d69-1f99e4d751dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0806 08:11:38.017213  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
helpers_test.go:344: "busybox" [58c19366-021a-45b0-9d69-1f99e4d751dd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00530682s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-053290 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-296432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-296432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036619643s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-296432 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-296432 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-296432 --alsologtostderr -v=3: (11.034304607s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-053290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-053290 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/old-k8s-version/serial/Stop (11.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-053290 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-053290 --alsologtostderr -v=3: (11.031871517s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-296432 -n no-preload-296432
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-296432 -n no-preload-296432: exit status 7 (65.624903ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-296432 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (272.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-296432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0806 08:11:52.483928  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-296432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (4m31.658523411s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-296432 -n no-preload-296432
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (272.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-053290 -n old-k8s-version-053290
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-053290 -n old-k8s-version-053290: exit status 7 (104.233035ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-053290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (131.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-053290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0806 08:11:58.498367  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:12:12.123071  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:12:12.576373  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:12:27.139769  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:12:33.444184  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:12:35.805303  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:12:39.458576  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:12:54.823646  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
E0806 08:12:55.270835  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.276073  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.286334  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.306676  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.346971  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.427290  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.587719  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:55.908274  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:56.549138  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:57.830201  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:12:58.797889  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:58.803186  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:58.813431  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:58.833663  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:58.873921  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:58.954235  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:59.114579  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:12:59.435322  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:00.076121  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:00.390419  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:13:01.356364  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:03.488665  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
E0806 08:13:03.916871  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:05.510973  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:13:09.037527  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:15.751533  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:13:19.278495  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:36.232212  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:13:39.759335  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:13:43.564814  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 08:13:55.364892  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:14:01.379574  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-053290 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.904340463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-053290 -n old-k8s-version-053290
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kpqm5" [0d28d3f8-8e18-432a-802c-0ef74afd584c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008389s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kpqm5" [0d28d3f8-8e18-432a-802c-0ef74afd584c] Running
E0806 08:14:17.192619  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:14:20.720317  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004088996s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-053290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-053290 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-053290 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-053290 -n old-k8s-version-053290
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-053290 -n old-k8s-version-053290: exit status 2 (358.560929ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-053290 -n old-k8s-version-053290
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-053290 -n old-k8s-version-053290: exit status 2 (321.032765ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-053290 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-053290 -n old-k8s-version-053290
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-053290 -n old-k8s-version-053290
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

TestStartStop/group/embed-certs/serial/FirstStart (53.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-519807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0806 08:14:28.280003  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:14:28.732521  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:14:32.075374  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 08:14:55.963414  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:14:56.417437  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:15:03.988232  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:03.993873  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.004062  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.024334  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.064643  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.144989  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.306175  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:04.626294  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:05.266943  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:06.547730  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:09.108041  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:15:14.228180  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-519807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (53.024134324s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.02s)

TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519807 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b0b98ef9-f6e0-4b89-8246-973e172d011d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b0b98ef9-f6e0-4b89-8246-973e172d011d] Running
E0806 08:15:24.469044  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003466158s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-519807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-519807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.081087126s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-519807 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-519807 --alsologtostderr -v=3
E0806 08:15:33.029235  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 08:15:39.113726  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-519807 --alsologtostderr -v=3: (10.999884893s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519807 -n embed-certs-519807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519807 -n embed-certs-519807: exit status 7 (70.991584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-519807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (266.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-519807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0806 08:15:42.640586  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:15:44.950005  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:16:11.523238  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
E0806 08:16:17.535585  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-519807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (4m26.261593096s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-519807 -n embed-certs-519807
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4r4ts" [6bda1b76-9003-467d-bebf-40b449645d36] Running
E0806 08:16:25.910743  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003796471s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4r4ts" [6bda1b76-9003-467d-bebf-40b449645d36] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004260698s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-296432 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-296432 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-296432 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-296432 -n no-preload-296432
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-296432 -n no-preload-296432: exit status 2 (315.212912ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-296432 -n no-preload-296432
E0806 08:16:36.092395  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.097690  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.107971  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.128320  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.168629  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.248905  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-296432 -n no-preload-296432: exit status 2 (342.048761ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-296432 --alsologtostderr -v=1
E0806 08:16:36.409514  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:36.730251  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-296432 -n no-preload-296432
E0806 08:16:37.370838  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-296432 -n no-preload-296432
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.94s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-012085 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0806 08:16:41.212385  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:45.219836  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:16:46.332815  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:16:56.572987  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:17:17.053949  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:17:27.140184  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-012085 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (51.876668985s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-012085 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d209bd46-5cdb-4177-b972-81380f05277b] Pending
helpers_test.go:344: "busybox" [d209bd46-5cdb-4177-b972-81380f05277b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d209bd46-5cdb-4177-b972-81380f05277b] Running
E0806 08:17:35.806164  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kindnet-845570/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00380319s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-012085 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-012085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-012085 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-012085 --alsologtostderr -v=3
E0806 08:17:47.832497  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-012085 --alsologtostderr -v=3: (10.969845308s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085: exit status 7 (71.886757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-012085 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-012085 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0806 08:17:55.270974  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:17:58.014171  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:17:58.797813  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:18:22.953949  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/bridge-845570/client.crt: no such file or directory
E0806 08:18:26.481276  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/flannel-845570/client.crt: no such file or directory
E0806 08:18:43.564805  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/skaffold-515239/client.crt: no such file or directory
E0806 08:19:19.934944  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/old-k8s-version-053290/client.crt: no such file or directory
E0806 08:19:28.280094  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/calico-845570/client.crt: no such file or directory
E0806 08:19:28.732420  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/custom-flannel-845570/client.crt: no such file or directory
E0806 08:19:32.075400  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
E0806 08:20:03.988659  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-012085 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (4m26.552710327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.89s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hrwzv" [b3bcc064-69dc-441f-a712-94a9158608e3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00370898s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hrwzv" [b3bcc064-69dc-441f-a712-94a9158608e3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004208156s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-519807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-519807 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-519807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519807 -n embed-certs-519807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519807 -n embed-certs-519807: exit status 2 (319.449405ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519807 -n embed-certs-519807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519807 -n embed-certs-519807: exit status 2 (332.37274ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-519807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-519807 -n embed-certs-519807
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-519807 -n embed-certs-519807
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.74s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-024904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0806 08:20:31.673352  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/kubenet-845570/client.crt: no such file or directory
E0806 08:20:33.028816  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/functional-674935/client.crt: no such file or directory
E0806 08:20:55.122563  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/addons-657623/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-024904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (37.741361215s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.74s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-024904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-024904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.268482641s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (10.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-024904 --alsologtostderr -v=3
E0806 08:21:11.522604  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/enable-default-cni-845570/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-024904 --alsologtostderr -v=3: (10.96518287s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-024904 -n newest-cni-024904
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-024904 -n newest-cni-024904: exit status 7 (69.004419ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-024904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-024904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0806 08:21:17.535399  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/false-845570/client.crt: no such file or directory
E0806 08:21:29.973658  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:29.978898  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:29.989137  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:30.009417  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:30.049678  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:30.129950  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
E0806 08:21:30.290829  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-024904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (17.105161173s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-024904 -n newest-cni-024904
E0806 08:21:30.615553  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-024904 image list --format=json
E0806 08:21:31.256035  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-024904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-024904 -n newest-cni-024904
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-024904 -n newest-cni-024904: exit status 2 (338.763514ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-024904 -n newest-cni-024904
E0806 08:21:32.536585  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/no-preload-296432/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-024904 -n newest-cni-024904: exit status 2 (341.302589ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-024904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-024904 -n newest-cni-024904
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-024904 -n newest-cni-024904
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5qn76" [fca9eb2e-2efa-4fd3-8a3d-b09b135d67ed] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003279609s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5qn76" [fca9eb2e-2efa-4fd3-8a3d-b09b135d67ed] Running
E0806 08:22:27.140260  884495 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/auto-845570/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005918066s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-012085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-012085 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-012085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085: exit status 2 (334.779925ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085: exit status 2 (298.798115ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-012085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-012085 -n default-k8s-diff-port-012085
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

Test skip (27/351)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-527696 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-527696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-527696
--- SKIP: TestDownloadOnlyKic (0.55s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestNetworkPlugins/group/cilium (5.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-845570 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845570

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-845570

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-845570

>>> host: /etc/nsswitch.conf:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/hosts:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/resolv.conf:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-845570

>>> host: crictl pods:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: crictl containers:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> k8s: describe netcat deployment:
error: context "cilium-845570" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-845570" does not exist

>>> k8s: netcat logs:
error: context "cilium-845570" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-845570" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-845570" does not exist

>>> k8s: coredns logs:
error: context "cilium-845570" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-845570" does not exist

>>> k8s: api server logs:
error: context "cilium-845570" does not exist

>>> host: /etc/cni:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: ip a s:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: ip r s:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: iptables-save:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: iptables table nat:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-845570

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-845570

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-845570" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-845570" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-845570

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-845570

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-845570" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-845570" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-845570" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-845570" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-845570" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: kubelet daemon config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> k8s: kubelet logs:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19370-879111/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 07:51:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-757689
contexts:
- context:
    cluster: pause-757689
    extensions:
    - extension:
        last-update: Tue, 06 Aug 2024 07:51:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-757689
  name: pause-757689
current-context: pause-757689
kind: Config
preferences: {}
users:
- name: pause-757689
  user:
    client-certificate: /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/pause-757689/client.crt
    client-key: /home/jenkins/minikube-integration/19370-879111/.minikube/profiles/pause-757689/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-845570

>>> host: docker daemon status:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: docker daemon config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: docker system info:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: cri-docker daemon status:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: cri-docker daemon config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: cri-dockerd version:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: containerd daemon status:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: containerd daemon config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: containerd config dump:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: crio daemon status:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: crio daemon config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: /etc/crio:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

>>> host: crio config:
* Profile "cilium-845570" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845570"

----------------------- debugLogs end: cilium-845570 [took: 5.495157983s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-845570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-845570
--- SKIP: TestNetworkPlugins/group/cilium (5.71s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-060218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-060218
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)