Test Report: KVM_Linux_containerd 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (2/326)

Order  Failed test                          Duration (s)
35     TestAddons/parallel/InspektorGadget  2046.55
45     TestAddons/StoppedEnableDisable      0
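The InspektorGadget failure below is a hang: the `addons disable inspektor-gadget` call never returned and was killed by the test harness after ~34 minutes. One way to triage this locally is to bound the command with coreutils `timeout`, so a hang surfaces deterministically as exit status 124 instead of a long-running CI kill. The sketch below demonstrates the exit-status convention with a stand-in `sleep`; the real command and profile name are copied from the log in this report, and the 2-second bound is purely illustrative.

```shell
# The call that hung in this run (binary path and profile taken from the log):
cmd="out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715398"

# Stand-in demonstration: `timeout` kills the child when the bound elapses
# and reports exit status 124. Swap "sleep 10" for "$cmd" when triaging locally
# (and use a realistic bound, e.g. 10m rather than 2s).
timeout 2 sleep 10
echo "exit=$?"   # prints exit=124 when the bound is hit
```

Distinguishing exit 124 (hang) from a nonzero minikube exit helps separate a stuck addon teardown from an ordinary command failure.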
TestAddons/parallel/InspektorGadget (2046.55s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-45fbz" [05144502-e330-4454-b03a-8dbd75533825] Running
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004684231s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715398
addons_test.go:851: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715398: signal: killed (33m58.62075334s)
addons_test.go:852: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715398" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-715398 -n addons-715398
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 logs -n 25: (1.224368187s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | binary-mirror-019201 | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC |                     |
	|         | binary-mirror-019201                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33319                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-019201                                                                     | binary-mirror-019201 | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC | 12 Sep 24 21:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC |                     |
	|         | addons-715398                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC |                     |
	|         | addons-715398                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-715398 --wait=true                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC | 12 Sep 24 21:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:35 UTC | 12 Sep 24 21:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:35 UTC | 12 Sep 24 21:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-715398 ssh cat                                                                       | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | /opt/local-path-provisioner/pvc-aff7e032-36f4-43b4-b8ae-1b2682fe1dfa_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-715398 ip                                                                            | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | -p addons-715398                                                                            |                      |         |         |                     |                     |
	| addons  | addons-715398 addons                                                                        | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC |                     |
	|         | addons-715398                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | addons-715398                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | -p addons-715398                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715398 addons                                                                        | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715398 addons                                                                        | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:36 UTC | 12 Sep 24 21:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-715398 ssh curl -s                                                                   | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:37 UTC | 12 Sep 24 21:37 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-715398 ip                                                                            | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:37 UTC | 12 Sep 24 21:37 UTC |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:37 UTC | 12 Sep 24 21:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715398 addons disable                                                                | addons-715398        | jenkins | v1.34.0 | 12 Sep 24 21:37 UTC | 12 Sep 24 21:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:30:24
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:30:24.885463   13962 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:30:24.885730   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:30:24.885739   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:30:24.885743   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:30:24.885917   13962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 21:30:24.886451   13962 out.go:352] Setting JSON to false
	I0912 21:30:24.887213   13962 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":766,"bootTime":1726175859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:30:24.887267   13962 start.go:139] virtualization: kvm guest
	I0912 21:30:24.889441   13962 out.go:177] * [addons-715398] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:30:24.891022   13962 notify.go:220] Checking for updates...
	I0912 21:30:24.891083   13962 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:30:24.892536   13962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:30:24.894030   13962 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 21:30:24.895446   13962 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 21:30:24.896869   13962 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:30:24.898382   13962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:30:24.900090   13962 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:30:24.932877   13962 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:30:24.934439   13962 start.go:297] selected driver: kvm2
	I0912 21:30:24.934463   13962 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:30:24.934477   13962 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:30:24.935510   13962 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:30:24.935587   13962 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:30:24.950061   13962 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:30:24.950100   13962 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:30:24.950291   13962 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:30:24.950349   13962 cni.go:84] Creating CNI manager for ""
	I0912 21:30:24.950360   13962 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 21:30:24.950369   13962 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:30:24.950426   13962 start.go:340] cluster config:
	{Name:addons-715398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-715398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:24.950527   13962 iso.go:125] acquiring lock: {Name:mkb0c1e04979058aa1830bb4b8c465592b866cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:30:24.952213   13962 out.go:177] * Starting "addons-715398" primary control-plane node in "addons-715398" cluster
	I0912 21:30:24.953319   13962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 21:30:24.953353   13962 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0912 21:30:24.953362   13962 cache.go:56] Caching tarball of preloaded images
	I0912 21:30:24.953425   13962 preload.go:172] Found /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 21:30:24.953434   13962 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0912 21:30:24.953773   13962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/config.json ...
	I0912 21:30:24.953795   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/config.json: {Name:mkdff3e6d5dff2adcf99c556601ad35826a2ffd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:24.953910   13962 start.go:360] acquireMachinesLock for addons-715398: {Name:mkc7e9f4f84ad3c5acd9a6d5045a9fe08aa8e719 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:30:24.953957   13962 start.go:364] duration metric: took 34.027µs to acquireMachinesLock for "addons-715398"
	I0912 21:30:24.953974   13962 start.go:93] Provisioning new machine with config: &{Name:addons-715398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-715398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 21:30:24.954028   13962 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:30:24.955587   13962 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 21:30:24.955686   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:30:24.955725   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:24.969250   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0912 21:30:24.969725   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:24.970233   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:30:24.970253   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:24.970645   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:24.970818   13962 main.go:141] libmachine: (addons-715398) Calling .GetMachineName
	I0912 21:30:24.970971   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:24.971111   13962 start.go:159] libmachine.API.Create for "addons-715398" (driver="kvm2")
	I0912 21:30:24.971140   13962 client.go:168] LocalClient.Create starting
	I0912 21:30:24.971186   13962 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem
	I0912 21:30:25.128748   13962 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/cert.pem
	I0912 21:30:25.251388   13962 main.go:141] libmachine: Running pre-create checks...
	I0912 21:30:25.251408   13962 main.go:141] libmachine: (addons-715398) Calling .PreCreateCheck
	I0912 21:30:25.251900   13962 main.go:141] libmachine: (addons-715398) Calling .GetConfigRaw
	I0912 21:30:25.252315   13962 main.go:141] libmachine: Creating machine...
	I0912 21:30:25.252331   13962 main.go:141] libmachine: (addons-715398) Calling .Create
	I0912 21:30:25.252548   13962 main.go:141] libmachine: (addons-715398) Creating KVM machine...
	I0912 21:30:25.253740   13962 main.go:141] libmachine: (addons-715398) DBG | found existing default KVM network
	I0912 21:30:25.254492   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:25.254336   13984 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0912 21:30:25.254516   13962 main.go:141] libmachine: (addons-715398) DBG | created network xml: 
	I0912 21:30:25.254529   13962 main.go:141] libmachine: (addons-715398) DBG | <network>
	I0912 21:30:25.254542   13962 main.go:141] libmachine: (addons-715398) DBG |   <name>mk-addons-715398</name>
	I0912 21:30:25.254553   13962 main.go:141] libmachine: (addons-715398) DBG |   <dns enable='no'/>
	I0912 21:30:25.254560   13962 main.go:141] libmachine: (addons-715398) DBG |   
	I0912 21:30:25.254570   13962 main.go:141] libmachine: (addons-715398) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:30:25.254580   13962 main.go:141] libmachine: (addons-715398) DBG |     <dhcp>
	I0912 21:30:25.254592   13962 main.go:141] libmachine: (addons-715398) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:30:25.254600   13962 main.go:141] libmachine: (addons-715398) DBG |     </dhcp>
	I0912 21:30:25.254608   13962 main.go:141] libmachine: (addons-715398) DBG |   </ip>
	I0912 21:30:25.254619   13962 main.go:141] libmachine: (addons-715398) DBG |   
	I0912 21:30:25.254676   13962 main.go:141] libmachine: (addons-715398) DBG | </network>
	I0912 21:30:25.254704   13962 main.go:141] libmachine: (addons-715398) DBG | 
	I0912 21:30:25.260211   13962 main.go:141] libmachine: (addons-715398) DBG | trying to create private KVM network mk-addons-715398 192.168.39.0/24...
	I0912 21:30:25.321549   13962 main.go:141] libmachine: (addons-715398) DBG | private KVM network mk-addons-715398 192.168.39.0/24 created
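The `network.go:206` line above reports the derived fields for the free private subnet 192.168.39.0/24 (gateway .1, client range .2–.254, broadcast .255). A minimal sketch of deriving those values with Go's `net` package; the helper name `subnetInfo` is illustrative and assumes a /24, it is not minikube's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// subnetInfo derives the gateway, usable client range, and broadcast
// address for a /24 CIDR, matching the values logged for 192.168.39.0/24.
func subnetInfo(cidr string) (gateway, clientMin, clientMax, broadcast string, err error) {
	ip, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", "", "", "", err
	}
	base := ip.Mask(ipNet.Mask).To4() // network address, e.g. 192.168.39.0
	mk := func(last byte) string {
		return net.IPv4(base[0], base[1], base[2], last).String()
	}
	// For a /24: .1 gateway, .2-.254 usable clients, .255 broadcast.
	return mk(1), mk(2), mk(254), mk(255), nil
}

func main() {
	gw, lo, hi, bc, _ := subnetInfo("192.168.39.0/24")
	fmt.Println(gw, lo, hi, bc) // 192.168.39.1 192.168.39.2 192.168.39.254 192.168.39.255
}
```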
	I0912 21:30:25.321586   13962 main.go:141] libmachine: (addons-715398) Setting up store path in /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398 ...
	I0912 21:30:25.321604   13962 main.go:141] libmachine: (addons-715398) Building disk image from file:///home/jenkins/minikube-integration/19616-5898/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:30:25.321617   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:25.321525   13984 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 21:30:25.321708   13962 main.go:141] libmachine: (addons-715398) Downloading /home/jenkins/minikube-integration/19616-5898/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5898/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:30:25.575862   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:25.575725   13984 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa...
	I0912 21:30:26.038829   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:26.038676   13984 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/addons-715398.rawdisk...
	I0912 21:30:26.038856   13962 main.go:141] libmachine: (addons-715398) DBG | Writing magic tar header
	I0912 21:30:26.038865   13962 main.go:141] libmachine: (addons-715398) DBG | Writing SSH key tar header
	I0912 21:30:26.038873   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:26.038784   13984 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398 ...
	I0912 21:30:26.038884   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398
	I0912 21:30:26.038992   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398 (perms=drwx------)
	I0912 21:30:26.039020   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins/minikube-integration/19616-5898/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:30:26.039028   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5898/.minikube/machines
	I0912 21:30:26.039038   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 21:30:26.039046   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5898
	I0912 21:30:26.039060   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:30:26.039068   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:30:26.039078   13962 main.go:141] libmachine: (addons-715398) DBG | Checking permissions on dir: /home
	I0912 21:30:26.039088   13962 main.go:141] libmachine: (addons-715398) DBG | Skipping /home - not owner
	I0912 21:30:26.039112   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins/minikube-integration/19616-5898/.minikube (perms=drwxr-xr-x)
	I0912 21:30:26.039126   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins/minikube-integration/19616-5898 (perms=drwxrwxr-x)
	I0912 21:30:26.039138   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:30:26.039150   13962 main.go:141] libmachine: (addons-715398) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:30:26.039157   13962 main.go:141] libmachine: (addons-715398) Creating domain...
	I0912 21:30:26.040057   13962 main.go:141] libmachine: (addons-715398) define libvirt domain using xml: 
	I0912 21:30:26.040073   13962 main.go:141] libmachine: (addons-715398) <domain type='kvm'>
	I0912 21:30:26.040081   13962 main.go:141] libmachine: (addons-715398)   <name>addons-715398</name>
	I0912 21:30:26.040086   13962 main.go:141] libmachine: (addons-715398)   <memory unit='MiB'>4000</memory>
	I0912 21:30:26.040092   13962 main.go:141] libmachine: (addons-715398)   <vcpu>2</vcpu>
	I0912 21:30:26.040097   13962 main.go:141] libmachine: (addons-715398)   <features>
	I0912 21:30:26.040103   13962 main.go:141] libmachine: (addons-715398)     <acpi/>
	I0912 21:30:26.040107   13962 main.go:141] libmachine: (addons-715398)     <apic/>
	I0912 21:30:26.040116   13962 main.go:141] libmachine: (addons-715398)     <pae/>
	I0912 21:30:26.040132   13962 main.go:141] libmachine: (addons-715398)     
	I0912 21:30:26.040145   13962 main.go:141] libmachine: (addons-715398)   </features>
	I0912 21:30:26.040150   13962 main.go:141] libmachine: (addons-715398)   <cpu mode='host-passthrough'>
	I0912 21:30:26.040156   13962 main.go:141] libmachine: (addons-715398)   
	I0912 21:30:26.040166   13962 main.go:141] libmachine: (addons-715398)   </cpu>
	I0912 21:30:26.040172   13962 main.go:141] libmachine: (addons-715398)   <os>
	I0912 21:30:26.040177   13962 main.go:141] libmachine: (addons-715398)     <type>hvm</type>
	I0912 21:30:26.040183   13962 main.go:141] libmachine: (addons-715398)     <boot dev='cdrom'/>
	I0912 21:30:26.040195   13962 main.go:141] libmachine: (addons-715398)     <boot dev='hd'/>
	I0912 21:30:26.040208   13962 main.go:141] libmachine: (addons-715398)     <bootmenu enable='no'/>
	I0912 21:30:26.040216   13962 main.go:141] libmachine: (addons-715398)   </os>
	I0912 21:30:26.040229   13962 main.go:141] libmachine: (addons-715398)   <devices>
	I0912 21:30:26.040246   13962 main.go:141] libmachine: (addons-715398)     <disk type='file' device='cdrom'>
	I0912 21:30:26.040264   13962 main.go:141] libmachine: (addons-715398)       <source file='/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/boot2docker.iso'/>
	I0912 21:30:26.040281   13962 main.go:141] libmachine: (addons-715398)       <target dev='hdc' bus='scsi'/>
	I0912 21:30:26.040296   13962 main.go:141] libmachine: (addons-715398)       <readonly/>
	I0912 21:30:26.040306   13962 main.go:141] libmachine: (addons-715398)     </disk>
	I0912 21:30:26.040314   13962 main.go:141] libmachine: (addons-715398)     <disk type='file' device='disk'>
	I0912 21:30:26.040331   13962 main.go:141] libmachine: (addons-715398)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:30:26.040347   13962 main.go:141] libmachine: (addons-715398)       <source file='/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/addons-715398.rawdisk'/>
	I0912 21:30:26.040367   13962 main.go:141] libmachine: (addons-715398)       <target dev='hda' bus='virtio'/>
	I0912 21:30:26.040380   13962 main.go:141] libmachine: (addons-715398)     </disk>
	I0912 21:30:26.040391   13962 main.go:141] libmachine: (addons-715398)     <interface type='network'>
	I0912 21:30:26.040399   13962 main.go:141] libmachine: (addons-715398)       <source network='mk-addons-715398'/>
	I0912 21:30:26.040407   13962 main.go:141] libmachine: (addons-715398)       <model type='virtio'/>
	I0912 21:30:26.040416   13962 main.go:141] libmachine: (addons-715398)     </interface>
	I0912 21:30:26.040428   13962 main.go:141] libmachine: (addons-715398)     <interface type='network'>
	I0912 21:30:26.040441   13962 main.go:141] libmachine: (addons-715398)       <source network='default'/>
	I0912 21:30:26.040459   13962 main.go:141] libmachine: (addons-715398)       <model type='virtio'/>
	I0912 21:30:26.040475   13962 main.go:141] libmachine: (addons-715398)     </interface>
	I0912 21:30:26.040496   13962 main.go:141] libmachine: (addons-715398)     <serial type='pty'>
	I0912 21:30:26.040519   13962 main.go:141] libmachine: (addons-715398)       <target port='0'/>
	I0912 21:30:26.040534   13962 main.go:141] libmachine: (addons-715398)     </serial>
	I0912 21:30:26.040547   13962 main.go:141] libmachine: (addons-715398)     <console type='pty'>
	I0912 21:30:26.040561   13962 main.go:141] libmachine: (addons-715398)       <target type='serial' port='0'/>
	I0912 21:30:26.040573   13962 main.go:141] libmachine: (addons-715398)     </console>
	I0912 21:30:26.040583   13962 main.go:141] libmachine: (addons-715398)     <rng model='virtio'>
	I0912 21:30:26.040596   13962 main.go:141] libmachine: (addons-715398)       <backend model='random'>/dev/random</backend>
	I0912 21:30:26.040609   13962 main.go:141] libmachine: (addons-715398)     </rng>
	I0912 21:30:26.040622   13962 main.go:141] libmachine: (addons-715398)     
	I0912 21:30:26.040633   13962 main.go:141] libmachine: (addons-715398)     
	I0912 21:30:26.040648   13962 main.go:141] libmachine: (addons-715398)   </devices>
	I0912 21:30:26.040793   13962 main.go:141] libmachine: (addons-715398) </domain>
	I0912 21:30:26.040805   13962 main.go:141] libmachine: (addons-715398) 
	I0912 21:30:26.046456   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:86:bd:5c in network default
	I0912 21:30:26.047020   13962 main.go:141] libmachine: (addons-715398) Ensuring networks are active...
	I0912 21:30:26.047037   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:26.047890   13962 main.go:141] libmachine: (addons-715398) Ensuring network default is active
	I0912 21:30:26.048165   13962 main.go:141] libmachine: (addons-715398) Ensuring network mk-addons-715398 is active
	I0912 21:30:26.048685   13962 main.go:141] libmachine: (addons-715398) Getting domain xml...
	I0912 21:30:26.049425   13962 main.go:141] libmachine: (addons-715398) Creating domain...
	I0912 21:30:27.444452   13962 main.go:141] libmachine: (addons-715398) Waiting to get IP...
	I0912 21:30:27.445173   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:27.445641   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:27.445686   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:27.445627   13984 retry.go:31] will retry after 213.421999ms: waiting for machine to come up
	I0912 21:30:27.661080   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:27.661543   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:27.661570   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:27.661487   13984 retry.go:31] will retry after 303.750714ms: waiting for machine to come up
	I0912 21:30:27.967018   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:27.967502   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:27.967530   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:27.967442   13984 retry.go:31] will retry after 422.843051ms: waiting for machine to come up
	I0912 21:30:28.391785   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:28.392234   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:28.392272   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:28.392210   13984 retry.go:31] will retry after 429.809289ms: waiting for machine to come up
	I0912 21:30:28.823903   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:28.824349   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:28.824378   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:28.824286   13984 retry.go:31] will retry after 742.720914ms: waiting for machine to come up
	I0912 21:30:29.568156   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:29.568521   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:29.568550   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:29.568473   13984 retry.go:31] will retry after 816.326645ms: waiting for machine to come up
	I0912 21:30:30.386076   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:30.386479   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:30.386508   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:30.386433   13984 retry.go:31] will retry after 1.028176736s: waiting for machine to come up
	I0912 21:30:31.416731   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:31.417197   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:31.417225   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:31.417155   13984 retry.go:31] will retry after 979.834482ms: waiting for machine to come up
	I0912 21:30:32.398256   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:32.398669   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:32.398698   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:32.398615   13984 retry.go:31] will retry after 1.659659256s: waiting for machine to come up
	I0912 21:30:34.059857   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:34.060318   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:34.060343   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:34.060273   13984 retry.go:31] will retry after 1.412458178s: waiting for machine to come up
	I0912 21:30:35.474894   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:35.475360   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:35.475390   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:35.475325   13984 retry.go:31] will retry after 2.335273602s: waiting for machine to come up
	I0912 21:30:37.813938   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:37.814427   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:37.814455   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:37.814389   13984 retry.go:31] will retry after 2.471705221s: waiting for machine to come up
	I0912 21:30:40.287605   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:40.288014   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:40.288041   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:40.287966   13984 retry.go:31] will retry after 3.438126303s: waiting for machine to come up
	I0912 21:30:43.730585   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:43.731166   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find current IP address of domain addons-715398 in network mk-addons-715398
	I0912 21:30:43.731187   13962 main.go:141] libmachine: (addons-715398) DBG | I0912 21:30:43.731126   13984 retry.go:31] will retry after 4.16712495s: waiting for machine to come up
	I0912 21:30:47.899461   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:47.899874   13962 main.go:141] libmachine: (addons-715398) Found IP for machine: 192.168.39.77
	I0912 21:30:47.899901   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has current primary IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:47.899910   13962 main.go:141] libmachine: (addons-715398) Reserving static IP address...
	I0912 21:30:47.900315   13962 main.go:141] libmachine: (addons-715398) DBG | unable to find host DHCP lease matching {name: "addons-715398", mac: "52:54:00:20:cc:cd", ip: "192.168.39.77"} in network mk-addons-715398
	I0912 21:30:47.967932   13962 main.go:141] libmachine: (addons-715398) DBG | Getting to WaitForSSH function...
	I0912 21:30:47.967977   13962 main.go:141] libmachine: (addons-715398) Reserved static IP address: 192.168.39.77
	I0912 21:30:47.967992   13962 main.go:141] libmachine: (addons-715398) Waiting for SSH to be available...
	I0912 21:30:47.970921   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:47.971426   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:47.971453   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:47.971601   13962 main.go:141] libmachine: (addons-715398) DBG | Using SSH client type: external
	I0912 21:30:47.971622   13962 main.go:141] libmachine: (addons-715398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa (-rw-------)
	I0912 21:30:47.971694   13962 main.go:141] libmachine: (addons-715398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:30:47.971708   13962 main.go:141] libmachine: (addons-715398) DBG | About to run SSH command:
	I0912 21:30:47.971718   13962 main.go:141] libmachine: (addons-715398) DBG | exit 0
	I0912 21:30:48.105773   13962 main.go:141] libmachine: (addons-715398) DBG | SSH cmd err, output: <nil>: 
	I0912 21:30:48.106033   13962 main.go:141] libmachine: (addons-715398) KVM machine creation complete!
	I0912 21:30:48.106398   13962 main.go:141] libmachine: (addons-715398) Calling .GetConfigRaw
	I0912 21:30:48.106957   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:48.107144   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:48.107304   13962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:30:48.107319   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:30:48.108678   13962 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:30:48.108692   13962 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:30:48.108697   13962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:30:48.108724   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.110746   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.111070   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.111096   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.111225   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.111434   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.111586   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.111748   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.111911   13962 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:48.112116   13962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0912 21:30:48.112126   13962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:30:48.217082   13962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:48.217111   13962 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:30:48.217121   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.219645   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.220021   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.220052   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.220311   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.220528   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.220699   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.220840   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.221010   13962 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:48.221212   13962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0912 21:30:48.221224   13962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:30:48.330399   13962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:30:48.330464   13962 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:30:48.330475   13962 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:30:48.330482   13962 main.go:141] libmachine: (addons-715398) Calling .GetMachineName
	I0912 21:30:48.330745   13962 buildroot.go:166] provisioning hostname "addons-715398"
	I0912 21:30:48.330770   13962 main.go:141] libmachine: (addons-715398) Calling .GetMachineName
	I0912 21:30:48.330963   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.333865   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.334200   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.334228   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.334387   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.334550   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.334685   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.334909   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.335140   13962 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:48.335313   13962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0912 21:30:48.335329   13962 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-715398 && echo "addons-715398" | sudo tee /etc/hostname
	I0912 21:30:48.456090   13962 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-715398
	
	I0912 21:30:48.456124   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.458759   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.459116   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.459145   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.459345   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.459502   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.459652   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.460064   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.460287   13962 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:48.460496   13962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0912 21:30:48.460513   13962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-715398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-715398/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-715398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:30:48.574468   13962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:48.574503   13962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5898/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5898/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5898/.minikube}
	I0912 21:30:48.574576   13962 buildroot.go:174] setting up certificates
	I0912 21:30:48.574592   13962 provision.go:84] configureAuth start
	I0912 21:30:48.574610   13962 main.go:141] libmachine: (addons-715398) Calling .GetMachineName
	I0912 21:30:48.574937   13962 main.go:141] libmachine: (addons-715398) Calling .GetIP
	I0912 21:30:48.577179   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.577513   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.577543   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.577736   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.579948   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.580286   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.580317   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.580428   13962 provision.go:143] copyHostCerts
	I0912 21:30:48.580521   13962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5898/.minikube/ca.pem (1078 bytes)
	I0912 21:30:48.580661   13962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5898/.minikube/cert.pem (1123 bytes)
	I0912 21:30:48.580763   13962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5898/.minikube/key.pem (1675 bytes)
	I0912 21:30:48.580843   13962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5898/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca-key.pem org=jenkins.addons-715398 san=[127.0.0.1 192.168.39.77 addons-715398 localhost minikube]
	I0912 21:30:48.817565   13962 provision.go:177] copyRemoteCerts
	I0912 21:30:48.817617   13962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:30:48.817636   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.820566   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.820888   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.820915   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.821057   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.821326   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.821520   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.821696   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:30:48.903788   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 21:30:48.927079   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:30:48.950266   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:30:48.974086   13962 provision.go:87] duration metric: took 399.477859ms to configureAuth
	I0912 21:30:48.974111   13962 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:30:48.974276   13962 config.go:182] Loaded profile config "addons-715398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 21:30:48.974296   13962 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:30:48.974306   13962 main.go:141] libmachine: (addons-715398) Calling .GetURL
	I0912 21:30:48.975566   13962 main.go:141] libmachine: (addons-715398) DBG | Using libvirt version 6000000
	I0912 21:30:48.977767   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.978131   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.978159   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.978320   13962 main.go:141] libmachine: Docker is up and running!
	I0912 21:30:48.978332   13962 main.go:141] libmachine: Reticulating splines...
	I0912 21:30:48.978338   13962 client.go:171] duration metric: took 24.007189319s to LocalClient.Create
	I0912 21:30:48.978366   13962 start.go:167] duration metric: took 24.007256118s to libmachine.API.Create "addons-715398"
	I0912 21:30:48.978378   13962 start.go:293] postStartSetup for "addons-715398" (driver="kvm2")
	I0912 21:30:48.978393   13962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:30:48.978414   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:48.978621   13962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:30:48.978640   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:48.980740   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.980971   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:48.980992   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:48.981148   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:48.981315   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:48.981473   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:48.981658   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:30:49.064416   13962 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:30:49.068871   13962 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:30:49.068895   13962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5898/.minikube/addons for local assets ...
	I0912 21:30:49.068967   13962 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5898/.minikube/files for local assets ...
	I0912 21:30:49.068989   13962 start.go:296] duration metric: took 90.601833ms for postStartSetup
	I0912 21:30:49.069020   13962 main.go:141] libmachine: (addons-715398) Calling .GetConfigRaw
	I0912 21:30:49.069554   13962 main.go:141] libmachine: (addons-715398) Calling .GetIP
	I0912 21:30:49.072166   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.072489   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:49.072518   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.072737   13962 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/config.json ...
	I0912 21:30:49.072925   13962 start.go:128] duration metric: took 24.118887809s to createHost
	I0912 21:30:49.072947   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:49.075302   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.075666   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:49.075704   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.075862   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:49.076063   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:49.076214   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:49.076383   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:49.076514   13962 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:49.076743   13962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0912 21:30:49.076759   13962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:30:49.182273   13962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726176649.156316380
	
	I0912 21:30:49.182295   13962 fix.go:216] guest clock: 1726176649.156316380
	I0912 21:30:49.182304   13962 fix.go:229] Guest: 2024-09-12 21:30:49.15631638 +0000 UTC Remote: 2024-09-12 21:30:49.0729376 +0000 UTC m=+24.219907831 (delta=83.37878ms)
	I0912 21:30:49.182344   13962 fix.go:200] guest clock delta is within tolerance: 83.37878ms
	I0912 21:30:49.182352   13962 start.go:83] releasing machines lock for "addons-715398", held for 24.228383533s
	I0912 21:30:49.182380   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:49.182684   13962 main.go:141] libmachine: (addons-715398) Calling .GetIP
	I0912 21:30:49.185398   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.185804   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:49.185834   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.185976   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:49.186579   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:49.186764   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:30:49.186852   13962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:30:49.186903   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:49.186986   13962 ssh_runner.go:195] Run: cat /version.json
	I0912 21:30:49.187027   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:30:49.189394   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.189753   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:49.189778   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.189796   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.189991   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:49.190172   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:49.190230   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:49.190254   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:49.190345   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:49.190488   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:30:49.190551   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:30:49.190661   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:30:49.190804   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:30:49.190968   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:30:49.267375   13962 ssh_runner.go:195] Run: systemctl --version
	I0912 21:30:49.297598   13962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:30:49.303233   13962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:30:49.303289   13962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:30:49.319521   13962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:30:49.319542   13962 start.go:495] detecting cgroup driver to use...
	I0912 21:30:49.319593   13962 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 21:30:49.351924   13962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 21:30:49.365820   13962 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:30:49.365878   13962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:30:49.379172   13962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:30:49.392537   13962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:30:49.504445   13962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:30:49.644322   13962 docker.go:233] disabling docker service ...
	I0912 21:30:49.644409   13962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:30:49.658638   13962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:30:49.671129   13962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:30:49.801307   13962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:30:49.936061   13962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:30:49.949384   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:30:49.967392   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:30:49.977475   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:30:49.987644   13962 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 21:30:49.987697   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 21:30:49.997834   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:30:50.008121   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:30:50.018203   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:30:50.028230   13962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:30:50.038442   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:30:50.048671   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:30:50.058741   13962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 21:30:50.068844   13962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:30:50.078167   13962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:30:50.078210   13962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:30:50.090643   13962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:30:50.099656   13962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:50.223696   13962 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 21:30:50.253003   13962 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0912 21:30:50.253101   13962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 21:30:50.258272   13962 retry.go:31] will retry after 956.300434ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0912 21:30:51.215431   13962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 21:30:51.221012   13962 start.go:563] Will wait 60s for crictl version
	I0912 21:30:51.221082   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:30:51.225082   13962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:30:51.261468   13962 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.21
	RuntimeApiVersion:  v1
	I0912 21:30:51.261545   13962 ssh_runner.go:195] Run: containerd --version
	I0912 21:30:51.289501   13962 ssh_runner.go:195] Run: containerd --version
	I0912 21:30:51.316843   13962 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.21 ...
	I0912 21:30:51.318498   13962 main.go:141] libmachine: (addons-715398) Calling .GetIP
	I0912 21:30:51.321221   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:51.321575   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:30:51.321597   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:30:51.321797   13962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:30:51.326026   13962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:51.338983   13962 kubeadm.go:883] updating cluster {Name:addons-715398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-715398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:30:51.339071   13962 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 21:30:51.339127   13962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:51.369335   13962 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:30:51.369388   13962 ssh_runner.go:195] Run: which lz4
	I0912 21:30:51.373168   13962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:30:51.377256   13962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:30:51.377285   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (388155145 bytes)
	I0912 21:30:52.635591   13962 containerd.go:563] duration metric: took 1.262444034s to copy over tarball
	I0912 21:30:52.635682   13962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:30:54.632028   13962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.996320062s)
	I0912 21:30:54.632055   13962 containerd.go:570] duration metric: took 1.996428921s to extract the tarball
	I0912 21:30:54.632066   13962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:30:54.670979   13962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:54.788121   13962 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 21:30:54.813430   13962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:54.845572   13962 retry.go:31] will retry after 226.029888ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-12T21:30:54Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0912 21:30:55.072498   13962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:55.106745   13962 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 21:30:55.106769   13962 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:30:55.106781   13962 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.1 containerd true true} ...
	I0912 21:30:55.106905   13962 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-715398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-715398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:30:55.106981   13962 ssh_runner.go:195] Run: sudo crictl info
	I0912 21:30:55.140839   13962 cni.go:84] Creating CNI manager for ""
	I0912 21:30:55.140865   13962 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 21:30:55.140873   13962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:30:55.140896   13962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-715398 NodeName:addons-715398 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:30:55.141029   13962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-715398"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:30:55.141086   13962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:30:55.151137   13962 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:30:55.151194   13962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:30:55.160858   13962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0912 21:30:55.177191   13962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:30:55.193693   13962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0912 21:30:55.210048   13962 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0912 21:30:55.213995   13962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:55.226294   13962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:55.337388   13962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:55.357758   13962 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398 for IP: 192.168.39.77
	I0912 21:30:55.357779   13962 certs.go:194] generating shared ca certs ...
	I0912 21:30:55.357794   13962 certs.go:226] acquiring lock for ca certs: {Name:mk321e73330fb75f7e8d075007a3889e35912e95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.357947   13962 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5898/.minikube/ca.key
	I0912 21:30:55.417418   13962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5898/.minikube/ca.crt ...
	I0912 21:30:55.417445   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/ca.crt: {Name:mk6abfbe77cd5e6f4990d7d5a42cc3510628bf51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.417613   13962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5898/.minikube/ca.key ...
	I0912 21:30:55.417626   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/ca.key: {Name:mked3cff555cfbab045bb6b23fe98e80d8ce6ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.417758   13962 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.key
	I0912 21:30:55.534473   13962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.crt ...
	I0912 21:30:55.534500   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.crt: {Name:mk3891f657cf1cfa0ab8b2576040f13ad6e1b249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.534676   13962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.key ...
	I0912 21:30:55.534689   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.key: {Name:mkcc247fbc0b2cb3548dc762648843d6b81b879c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.534791   13962 certs.go:256] generating profile certs ...
	I0912 21:30:55.534850   13962 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.key
	I0912 21:30:55.534864   13962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt with IP's: []
	I0912 21:30:55.792498   13962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt ...
	I0912 21:30:55.792526   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: {Name:mkf7fc315e17b52f064292f5608092de8d052dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.793149   13962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.key ...
	I0912 21:30:55.793165   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.key: {Name:mkb3d74e8c81f6fa40b83cf93d4e1d4709904112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.793259   13962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key.3a87bc53
	I0912 21:30:55.793278   13962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt.3a87bc53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77]
	I0912 21:30:55.870363   13962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt.3a87bc53 ...
	I0912 21:30:55.870401   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt.3a87bc53: {Name:mkb9f6a94031d2233aa4870febeee953fa38fbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.870591   13962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key.3a87bc53 ...
	I0912 21:30:55.870609   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key.3a87bc53: {Name:mk25956b394270362d7455eae50857ca13c84790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.870714   13962 certs.go:381] copying /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt.3a87bc53 -> /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt
	I0912 21:30:55.870845   13962 certs.go:385] copying /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key.3a87bc53 -> /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key
	I0912 21:30:55.870918   13962 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.key
	I0912 21:30:55.870941   13962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.crt with IP's: []
	I0912 21:30:55.945040   13962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.crt ...
	I0912 21:30:55.945067   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.crt: {Name:mke78a61cf1d8c6bd46c052fa49060510cbd3129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.945231   13962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.key ...
	I0912 21:30:55.945247   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.key: {Name:mkbfcd6cd1f35cb0507ef201f385a88563c44ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:55.945942   13962 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:30:55.945982   13962 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/ca.pem (1078 bytes)
	I0912 21:30:55.946020   13962 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:30:55.946045   13962 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5898/.minikube/certs/key.pem (1675 bytes)
	I0912 21:30:55.946571   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:30:55.972456   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:30:55.996721   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:30:56.023552   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:30:56.048499   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:30:56.082583   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:30:56.110928   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:30:56.134804   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 21:30:56.159487   13962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:30:56.182643   13962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:30:56.199229   13962 ssh_runner.go:195] Run: openssl version
	I0912 21:30:56.205021   13962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:30:56.216124   13962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:56.221990   13962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:56.222055   13962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:56.228065   13962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:30:56.239049   13962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:30:56.243485   13962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:30:56.243538   13962 kubeadm.go:392] StartCluster: {Name:addons-715398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-715398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:56.243611   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0912 21:30:56.243648   13962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:30:56.278787   13962 cri.go:89] found id: ""
	I0912 21:30:56.278865   13962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:30:56.289461   13962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:30:56.299212   13962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:30:56.310364   13962 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:30:56.310381   13962 kubeadm.go:157] found existing configuration files:
	
	I0912 21:30:56.310416   13962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:30:56.320802   13962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:30:56.320855   13962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:30:56.331870   13962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:30:56.342501   13962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:30:56.342546   13962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:30:56.354137   13962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:30:56.364056   13962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:30:56.364111   13962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:30:56.373782   13962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:30:56.382810   13962 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:30:56.382869   13962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:30:56.392765   13962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:30:56.444555   13962 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:30:56.444651   13962 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:30:56.539614   13962 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:30:56.539784   13962 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:30:56.539943   13962 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:30:56.545075   13962 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:30:56.640010   13962 out.go:235]   - Generating certificates and keys ...
	I0912 21:30:56.640146   13962 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:30:56.640238   13962 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:30:56.695030   13962 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:30:56.777659   13962 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:30:56.957694   13962 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:30:57.064949   13962 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:30:57.235783   13962 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:30:57.235937   13962 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-715398 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0912 21:30:57.326907   13962 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:30:57.327107   13962 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-715398 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0912 21:30:57.511332   13962 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:30:57.615557   13962 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:30:57.821066   13962 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:30:57.821214   13962 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:30:58.011572   13962 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:30:58.183093   13962 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:30:58.290090   13962 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:30:58.420827   13962 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:30:58.598445   13962 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:30:58.598888   13962 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:30:58.601486   13962 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:30:58.669081   13962 out.go:235]   - Booting up control plane ...
	I0912 21:30:58.669205   13962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:30:58.669304   13962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:30:58.669390   13962 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:30:58.669514   13962 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:30:58.669630   13962 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:30:58.669704   13962 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:30:58.775810   13962 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:30:58.775994   13962 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:30:59.277266   13962 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.714628ms
	I0912 21:30:59.277383   13962 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:31:04.775255   13962 kubeadm.go:310] [api-check] The API server is healthy after 5.501373054s
	I0912 21:31:04.794681   13962 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:31:04.812750   13962 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:31:04.841775   13962 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:31:04.841989   13962 kubeadm.go:310] [mark-control-plane] Marking the node addons-715398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:31:04.853398   13962 kubeadm.go:310] [bootstrap-token] Using token: fp3v8u.6pyj0mgioj0arv7p
	I0912 21:31:04.854834   13962 out.go:235]   - Configuring RBAC rules ...
	I0912 21:31:04.854940   13962 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:31:04.865273   13962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:31:04.874439   13962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:31:04.878460   13962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:31:04.885146   13962 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:31:04.888579   13962 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:31:05.181792   13962 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:31:05.618404   13962 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:31:06.182088   13962 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:31:06.183002   13962 kubeadm.go:310] 
	I0912 21:31:06.183074   13962 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:31:06.183090   13962 kubeadm.go:310] 
	I0912 21:31:06.183187   13962 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:31:06.183200   13962 kubeadm.go:310] 
	I0912 21:31:06.183224   13962 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:31:06.183326   13962 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:31:06.183416   13962 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:31:06.183442   13962 kubeadm.go:310] 
	I0912 21:31:06.183515   13962 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:31:06.183526   13962 kubeadm.go:310] 
	I0912 21:31:06.183581   13962 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:31:06.183591   13962 kubeadm.go:310] 
	I0912 21:31:06.183655   13962 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:31:06.183759   13962 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:31:06.183869   13962 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:31:06.183886   13962 kubeadm.go:310] 
	I0912 21:31:06.183960   13962 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:31:06.184031   13962 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:31:06.184039   13962 kubeadm.go:310] 
	I0912 21:31:06.184107   13962 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fp3v8u.6pyj0mgioj0arv7p \
	I0912 21:31:06.184200   13962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4bb83f3730173731326b9dc25a9344e0080041a712d4f3f89e468830ba666948 \
	I0912 21:31:06.184227   13962 kubeadm.go:310] 	--control-plane 
	I0912 21:31:06.184235   13962 kubeadm.go:310] 
	I0912 21:31:06.184328   13962 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:31:06.184347   13962 kubeadm.go:310] 
	I0912 21:31:06.184457   13962 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fp3v8u.6pyj0mgioj0arv7p \
	I0912 21:31:06.184589   13962 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4bb83f3730173731326b9dc25a9344e0080041a712d4f3f89e468830ba666948 
	I0912 21:31:06.187559   13962 kubeadm.go:310] W0912 21:30:56.424324     761 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:31:06.187933   13962 kubeadm.go:310] W0912 21:30:56.425317     761 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:31:06.188057   13962 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:31:06.188097   13962 cni.go:84] Creating CNI manager for ""
	I0912 21:31:06.188111   13962 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 21:31:06.190076   13962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:31:06.191397   13962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:31:06.202700   13962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:31:06.225913   13962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:31:06.226038   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-715398 minikube.k8s.io/updated_at=2024_09_12T21_31_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-715398 minikube.k8s.io/primary=true
	I0912 21:31:06.226068   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:06.239971   13962 ops.go:34] apiserver oom_adj: -16
	I0912 21:31:06.369317   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:06.869932   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:07.370016   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:07.869574   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:08.370420   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:08.869927   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:09.369943   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:09.869335   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:10.369885   13962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:31:10.452264   13962 kubeadm.go:1113] duration metric: took 4.226254668s to wait for elevateKubeSystemPrivileges
	I0912 21:31:10.452297   13962 kubeadm.go:394] duration metric: took 14.208763346s to StartCluster
	I0912 21:31:10.452314   13962 settings.go:142] acquiring lock: {Name:mkbd5ef2db08036fb13aace114ac48b8cb002130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:31:10.452437   13962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 21:31:10.452895   13962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/kubeconfig: {Name:mk33427b02642c68cc845daaca488b1927fe63e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:31:10.453122   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:31:10.453151   13962 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 21:31:10.453210   13962 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:31:10.453323   13962 addons.go:69] Setting yakd=true in profile "addons-715398"
	I0912 21:31:10.453329   13962 addons.go:69] Setting inspektor-gadget=true in profile "addons-715398"
	I0912 21:31:10.453351   13962 addons.go:234] Setting addon yakd=true in "addons-715398"
	I0912 21:31:10.453354   13962 addons.go:234] Setting addon inspektor-gadget=true in "addons-715398"
	I0912 21:31:10.453346   13962 addons.go:69] Setting gcp-auth=true in profile "addons-715398"
	I0912 21:31:10.453379   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.453387   13962 mustload.go:65] Loading cluster: addons-715398
	I0912 21:31:10.453395   13962 addons.go:69] Setting volcano=true in profile "addons-715398"
	I0912 21:31:10.453406   13962 config.go:182] Loaded profile config "addons-715398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 21:31:10.453423   13962 addons.go:69] Setting volumesnapshots=true in profile "addons-715398"
	I0912 21:31:10.453405   13962 addons.go:69] Setting storage-provisioner=true in profile "addons-715398"
	I0912 21:31:10.453441   13962 addons.go:234] Setting addon volumesnapshots=true in "addons-715398"
	I0912 21:31:10.453416   13962 addons.go:234] Setting addon volcano=true in "addons-715398"
	I0912 21:31:10.453457   13962 addons.go:234] Setting addon storage-provisioner=true in "addons-715398"
	I0912 21:31:10.453461   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.453484   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.453500   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.453572   13962 config.go:182] Loaded profile config "addons-715398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 21:31:10.453863   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.453900   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.453911   13962 addons.go:69] Setting helm-tiller=true in profile "addons-715398"
	I0912 21:31:10.453914   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.453917   13962 addons.go:69] Setting ingress-dns=true in profile "addons-715398"
	I0912 21:31:10.453932   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.453937   13962 addons.go:234] Setting addon helm-tiller=true in "addons-715398"
	I0912 21:31:10.453942   13962 addons.go:234] Setting addon ingress-dns=true in "addons-715398"
	I0912 21:31:10.453942   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.453967   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.453977   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.454000   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454019   13962 addons.go:69] Setting metrics-server=true in profile "addons-715398"
	I0912 21:31:10.454042   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.454058   13962 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-715398"
	I0912 21:31:10.454082   13962 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-715398"
	I0912 21:31:10.454097   13962 addons.go:69] Setting registry=true in profile "addons-715398"
	I0912 21:31:10.454118   13962 addons.go:234] Setting addon registry=true in "addons-715398"
	I0912 21:31:10.454144   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.454287   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454303   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454314   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.454326   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.453903   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454416   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.453904   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.453904   13962 addons.go:69] Setting ingress=true in profile "addons-715398"
	I0912 21:31:10.454484   13962 addons.go:234] Setting addon ingress=true in "addons-715398"
	I0912 21:31:10.454501   13962 addons.go:69] Setting default-storageclass=true in profile "addons-715398"
	I0912 21:31:10.454526   13962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-715398"
	I0912 21:31:10.454529   13962 addons.go:69] Setting cloud-spanner=true in profile "addons-715398"
	I0912 21:31:10.454539   13962 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-715398"
	I0912 21:31:10.453379   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.454046   13962 addons.go:234] Setting addon metrics-server=true in "addons-715398"
	I0912 21:31:10.453387   13962 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-715398"
	I0912 21:31:10.454611   13962 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-715398"
	I0912 21:31:10.454578   13962 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-715398"
	I0912 21:31:10.454557   13962 addons.go:234] Setting addon cloud-spanner=true in "addons-715398"
	I0912 21:31:10.454704   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454739   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.454746   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.454929   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.454992   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.455094   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.455132   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.455099   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.455509   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.455556   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.455603   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.455725   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.455761   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.455816   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.455903   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.455822   13962 out.go:177] * Verifying Kubernetes components...
	I0912 21:31:10.457322   13962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:31:10.475349   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0912 21:31:10.475800   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0912 21:31:10.475836   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.476042   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0912 21:31:10.476477   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.476496   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.476510   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.478272   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42409
	I0912 21:31:10.478477   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.478497   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.478562   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.478634   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.478887   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.479176   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.479654   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.479697   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.479992   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.480015   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.480155   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.480169   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.480564   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.480604   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.481208   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.481657   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.481840   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.482346   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.482370   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.483935   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.484286   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.484305   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.486046   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.486078   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.486085   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.486107   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.486114   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.486148   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.486583   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.486619   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.489348   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0912 21:31:10.489873   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.490447   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.490471   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.490786   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.490920   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.493876   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0912 21:31:10.494478   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.494902   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.494922   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.494991   13962 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-715398"
	I0912 21:31:10.495032   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.495420   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.495461   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.496597   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0912 21:31:10.497038   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.497117   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.497636   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.497689   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.497997   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.498014   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.500452   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0912 21:31:10.506050   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.506649   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.506675   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.506906   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.507489   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.507509   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.508165   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.508817   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.508854   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.515658   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0912 21:31:10.517937   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.518631   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.518647   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.521935   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.522477   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.522515   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.528184   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0912 21:31:10.528745   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.529358   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.529371   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.529600   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.530117   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.530152   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.530192   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0912 21:31:10.530564   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.531362   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.531377   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.531735   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.532273   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.532296   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.533010   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I0912 21:31:10.533418   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.533924   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.533942   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.534370   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.534577   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.536402   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.536965   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33587
	I0912 21:31:10.537121   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0912 21:31:10.538339   13962 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:31:10.539400   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0912 21:31:10.539983   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.540061   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.540579   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.540595   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.541209   13962 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:31:10.541396   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.541943   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.541985   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.542164   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0912 21:31:10.542275   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.542982   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.542998   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.543660   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I0912 21:31:10.544178   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.544345   13962 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:31:10.544644   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.544664   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.545028   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.545198   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.545572   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.546236   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.546308   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0912 21:31:10.546851   13962 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:31:10.546879   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:31:10.546897   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.547024   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.547863   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.547881   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.548091   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0912 21:31:10.549181   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0912 21:31:10.549220   13962 addons.go:234] Setting addon default-storageclass=true in "addons-715398"
	I0912 21:31:10.549253   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:10.549268   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.549284   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.549588   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.549620   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.549648   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.549865   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.550028   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.550040   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.550251   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.550327   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.550470   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.550481   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.550538   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.550563   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.550606   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.550956   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.551128   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.551165   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.551290   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.551314   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.551498   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.551551   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.551720   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.551758   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.551794   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.551808   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.551867   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.552091   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.552150   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.552196   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42267
	I0912 21:31:10.552381   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.552414   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.552410   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.552606   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.553110   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.553126   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.553226   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.553259   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.553505   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.553741   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.554421   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.555180   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.556235   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.556429   13962 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:31:10.557422   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0912 21:31:10.557658   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:31:10.557750   13962 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:31:10.557798   13962 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:31:10.557819   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:31:10.557841   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.558588   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.559164   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.559187   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.559359   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:31:10.559379   13962 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:31:10.559400   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.559517   13962 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:31:10.559524   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:31:10.559535   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.559561   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.560323   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.561424   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0912 21:31:10.561986   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.562811   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.562857   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.563552   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.563853   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.564287   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.564482   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.564502   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.564790   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.565421   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.565448   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.565632   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.566008   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.566117   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.566137   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.566152   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.566338   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.566608   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.567030   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.567242   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.567818   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.567840   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.568070   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.568232   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.568367   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.568494   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.569570   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.569639   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0912 21:31:10.570410   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.570871   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.570890   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.571186   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.571978   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.572408   13962 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:31:10.573245   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0912 21:31:10.573649   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.573696   13962 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:31:10.573723   13962 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:31:10.573744   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.574434   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.575721   13962 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:31:10.576694   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.576715   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.577099   13962 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:31:10.577115   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:31:10.577130   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.577133   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.577308   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.577771   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.578409   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.578440   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.578642   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.579190   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.580257   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.580320   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I0912 21:31:10.580625   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.580777   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.581170   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.581219   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.581294   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.581314   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.581478   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.581615   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.581742   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.581793   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.581815   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.581856   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.582124   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.582250   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.583022   13962 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:31:10.584126   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.585417   13962 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:31:10.585418   13962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:31:10.586134   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0912 21:31:10.586583   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.586986   13962 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:31:10.587004   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:31:10.587020   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.587033   13962 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:31:10.587046   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:31:10.587062   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.587068   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.587081   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.587370   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.587819   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:10.587849   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:10.590737   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0912 21:31:10.592551   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0912 21:31:10.592743   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.592881   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.593210   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.593317   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.593336   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.593506   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.593522   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.593735   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.593863   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.594063   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.594431   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.594624   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.594627   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.594646   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.595188   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.595415   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.595717   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.595961   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.596077   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.596790   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.596872   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0912 21:31:10.597024   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0912 21:31:10.597575   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.597610   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.597640   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.597648   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.597768   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.597901   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:31:10.597922   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.597949   13962 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:31:10.598205   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.598368   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.599127   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.599151   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.599259   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.599278   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.599671   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.599673   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.599873   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.600050   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.600103   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:31:10.600150   13962 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:31:10.601477   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:31:10.601485   13962 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:31:10.601686   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.602062   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.602933   13962 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:31:10.602952   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:31:10.602964   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.603550   13962 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:31:10.603558   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:31:10.603633   13962 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:31:10.604146   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0912 21:31:10.604650   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.605205   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.605224   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.605468   13962 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:31:10.605480   13962 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:31:10.605495   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.606255   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.606461   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:31:10.606510   13962 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:31:10.606728   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.606770   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.607145   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.607164   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.607300   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.607571   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.607713   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.607861   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.608183   13962 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:31:10.608191   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:31:10.608201   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.609463   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:31:10.610218   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.610340   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0912 21:31:10.610349   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.610929   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.610954   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.611190   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.611268   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.611287   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0912 21:31:10.611479   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.611615   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.611628   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.611709   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:10.611784   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.611929   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.611919   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.612069   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.612467   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:10.612485   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:10.612497   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:31:10.612608   13962 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:31:10.613324   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.613490   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:10.613763   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:10.613830   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.613850   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.613949   13962 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:31:10.613965   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:31:10.613981   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.614019   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.614175   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.614294   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.614381   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.614760   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.614995   13962 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:31:10.615004   13962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:31:10.615014   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.615310   13962 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:31:10.615556   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:10.616492   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:31:10.616516   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:31:10.616537   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.617348   13962 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:31:10.617967   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.617984   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.618351   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.618373   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.618385   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.618393   13962 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:31:10.618400   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.618401   13962 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:31:10.618411   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:10.618917   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.618932   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.619056   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.619093   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.619173   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.619260   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.619264   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.619386   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.621589   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.622093   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.622117   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.622129   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.622329   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.622499   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.622552   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:10.622576   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:10.622777   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.622822   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:10.623013   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:10.623024   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:10.623197   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:10.623356   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:11.025849   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:31:11.025870   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:31:11.089209   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:31:11.089241   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:31:11.212251   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:31:11.237156   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:31:11.286634   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:31:11.301701   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:31:11.439321   13962 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:31:11.439345   13962 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:31:11.447771   13962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:31:11.447912   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:31:11.449849   13962 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:31:11.449875   13962 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:31:11.485632   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:31:11.489950   13962 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:31:11.489966   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:31:11.494766   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:31:11.494785   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:31:11.536629   13962 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:31:11.536653   13962 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:31:11.602682   13962 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:31:11.602712   13962 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:31:11.665187   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:31:11.715853   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:31:11.723250   13962 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:31:11.723269   13962 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:31:11.768193   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:31:11.891570   13962 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:31:11.891592   13962 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:31:11.912816   13962 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:31:11.912840   13962 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:31:11.956270   13962 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:31:11.956294   13962 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:31:11.963527   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:31:11.963545   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:31:11.963795   13962 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:31:11.963810   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:31:12.003178   13962 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:31:12.003207   13962 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:31:12.098857   13962 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:31:12.098881   13962 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:31:12.140967   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:31:12.141997   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:31:12.206743   13962 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:31:12.206769   13962 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:31:12.242239   13962 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:31:12.242271   13962 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:31:12.243098   13962 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:31:12.243114   13962 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:31:12.289059   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:31:12.289085   13962 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:31:12.331266   13962 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:31:12.331289   13962 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:31:12.423573   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:31:12.448063   13962 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:31:12.448082   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:31:12.490607   13962 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:31:12.490637   13962 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:31:12.502922   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:31:12.502943   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:31:12.597034   13962 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:31:12.597057   13962 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:31:12.740115   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:31:12.761723   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:31:12.761751   13962 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:31:12.766301   13962 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:31:12.766318   13962 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:31:12.848325   13962 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:31:12.848346   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:31:13.001665   13962 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:31:13.001722   13962 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:31:13.016081   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:31:13.016102   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:31:13.125961   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:31:13.227820   13962 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:31:13.227846   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:31:13.265224   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:31:13.265247   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:31:13.460766   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:31:13.576280   13962 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:31:13.576306   13962 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:31:13.899812   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:31:17.601260   13962 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:31:17.601294   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:17.604705   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:17.605175   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:17.605202   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:17.605443   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:17.605642   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:17.605815   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:17.605994   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:18.154297   13962 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:31:18.437954   13962 addons.go:234] Setting addon gcp-auth=true in "addons-715398"
	I0912 21:31:18.438007   13962 host.go:66] Checking if "addons-715398" exists ...
	I0912 21:31:18.438341   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:18.438377   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:18.454565   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0912 21:31:18.455013   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:18.455505   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:18.455520   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:18.455861   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:18.456467   13962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 21:31:18.456499   13962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:31:18.471765   13962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39601
	I0912 21:31:18.472182   13962 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:31:18.472672   13962 main.go:141] libmachine: Using API Version  1
	I0912 21:31:18.472698   13962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:31:18.473010   13962 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:31:18.473189   13962 main.go:141] libmachine: (addons-715398) Calling .GetState
	I0912 21:31:18.474821   13962 main.go:141] libmachine: (addons-715398) Calling .DriverName
	I0912 21:31:18.475050   13962 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:31:18.475069   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHHostname
	I0912 21:31:18.478107   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:18.478567   13962 main.go:141] libmachine: (addons-715398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:cc:cd", ip: ""} in network mk-addons-715398: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:40 +0000 UTC Type:0 Mac:52:54:00:20:cc:cd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:addons-715398 Clientid:01:52:54:00:20:cc:cd}
	I0912 21:31:18.478598   13962 main.go:141] libmachine: (addons-715398) DBG | domain addons-715398 has defined IP address 192.168.39.77 and MAC address 52:54:00:20:cc:cd in network mk-addons-715398
	I0912 21:31:18.478937   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHPort
	I0912 21:31:18.479128   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHKeyPath
	I0912 21:31:18.479309   13962 main.go:141] libmachine: (addons-715398) Calling .GetSSHUsername
	I0912 21:31:18.479467   13962 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/addons-715398/id_rsa Username:docker}
	I0912 21:31:19.704864   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.467659866s)
	I0912 21:31:19.704904   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.418237959s)
	I0912 21:31:19.704921   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.704935   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.704951   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.704968   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.705015   13962 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.25722264s)
	I0912 21:31:19.705020   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.403271286s)
	I0912 21:31:19.705053   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.705063   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.705090   13962 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.257151776s)
	I0912 21:31:19.705115   13962 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0912 21:31:19.705253   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.705285   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.705302   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.705311   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.705319   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.705456   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.705475   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.705484   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.705492   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.705574   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.706013   13962 node_ready.go:35] waiting up to 6m0s for node "addons-715398" to be "Ready" ...
	I0912 21:31:19.706247   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.706257   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.706271   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.706312   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.706333   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.706349   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.706365   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.706665   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.706705   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.706713   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.706908   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.494629481s)
	I0912 21:31:19.706929   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.706938   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.706991   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.706998   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.707160   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.707176   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.707188   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.707197   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.707208   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.707433   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:19.707470   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.707483   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.707493   13962 addons.go:475] Verifying addon ingress=true in "addons-715398"
	I0912 21:31:19.709997   13962 out.go:177] * Verifying ingress addon...
	I0912 21:31:19.712091   13962 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:31:19.766391   13962 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:31:19.766420   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.787513   13962 node_ready.go:49] node "addons-715398" has status "Ready":"True"
	I0912 21:31:19.787547   13962 node_ready.go:38] duration metric: took 81.497423ms for node "addons-715398" to be "Ready" ...
	I0912 21:31:19.787559   13962 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:31:19.817751   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.817771   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.818083   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.818100   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	W0912 21:31:19.818183   13962 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0912 21:31:19.848352   13962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:19.853686   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:19.853715   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:19.853995   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:19.854045   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:19.854050   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:20.234006   13962 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-715398" context rescaled to 1 replicas
	I0912 21:31:20.246612   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.789181   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.330349   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.777604   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.932957   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:22.341272   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.495783   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.830562109s)
	I0912 21:31:22.495837   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.495851   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.495847   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.779967732s)
	I0912 21:31:22.495873   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.010216834s)
	I0912 21:31:22.495901   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.495895   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.727678058s)
	I0912 21:31:22.495960   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.495985   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.495913   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496015   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.353996289s)
	I0912 21:31:22.495971   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.354960389s)
	I0912 21:31:22.496040   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496056   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496058   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.495915   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496067   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496077   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496106   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.072497415s)
	I0912 21:31:22.496125   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496134   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496156   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.755986788s)
	I0912 21:31:22.496172   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496184   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496242   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.370250437s)
	I0912 21:31:22.496276   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.496291   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.496297   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	W0912 21:31:22.496269   13962 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:31:22.496345   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.496352   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.496354   13962 retry.go:31] will retry after 289.376419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:31:22.496358   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.496331   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.496370   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496377   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496378   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.496386   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.496393   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496400   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496377   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.496428   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496435   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496504   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.496534   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.496543   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.496576   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.496683   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.496715   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.496723   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.497831   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.497909   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.497921   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.497936   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.497941   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498286   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.498308   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.037508675s)
	I0912 21:31:22.498321   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.498328   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498334   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.498344   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.498385   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.498398   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498583   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.498605   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.498612   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498619   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.498626   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.498640   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.498651   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498884   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.498894   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.498951   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.498957   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.498965   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.498971   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.499779   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.499812   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.499830   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.499837   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.499962   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.499985   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.500211   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.500221   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.500229   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.500236   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.500276   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.500287   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.500411   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.500435   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.500480   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.500572   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.500608   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.500626   13962 addons.go:475] Verifying addon metrics-server=true in "addons-715398"
	I0912 21:31:22.500774   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.500789   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.500797   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:22.500804   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:22.501345   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:22.501376   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:22.501383   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:22.501395   13962 addons.go:475] Verifying addon registry=true in "addons-715398"
	I0912 21:31:22.502303   13962 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-715398 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:31:22.503556   13962 out.go:177] * Verifying registry addon...
	I0912 21:31:22.505660   13962 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:31:22.547197   13962 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:31:22.547216   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:22.785881   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:31:22.837580   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.026066   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.039719   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.1398507s)
	I0912 21:31:23.039780   13962 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.564708492s)
	I0912 21:31:23.039782   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:23.039964   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:23.040233   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:23.040271   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:23.040288   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:23.040301   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:23.040312   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:23.040578   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:23.040583   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:23.040594   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:23.040610   13962 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-715398"
	I0912 21:31:23.042401   13962 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:31:23.042414   13962 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:31:23.044169   13962 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:31:23.044849   13962 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:31:23.045369   13962 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:31:23.045390   13962 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:31:23.080137   13962 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:31:23.080169   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.150917   13962 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:31:23.150953   13962 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:31:23.211335   13962 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:31:23.211360   13962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:31:23.268666   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.280090   13962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:31:23.511373   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.613245   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.719717   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.011555   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.050849   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.216675   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.356162   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:24.484296   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.698365997s)
	I0912 21:31:24.484344   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:24.484359   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:24.484672   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:24.484690   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:24.484714   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:24.484715   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:24.484723   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:24.485011   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:24.485064   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:24.485079   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:24.535941   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.579085   13962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.298953122s)
	I0912 21:31:24.579145   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:24.579160   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:24.579473   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:24.579493   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:24.579503   13962 main.go:141] libmachine: Making call to close driver server
	I0912 21:31:24.579511   13962 main.go:141] libmachine: (addons-715398) Calling .Close
	I0912 21:31:24.579760   13962 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:31:24.579776   13962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:31:24.579798   13962 main.go:141] libmachine: (addons-715398) DBG | Closing plugin on server side
	I0912 21:31:24.581577   13962 addons.go:475] Verifying addon gcp-auth=true in "addons-715398"
	I0912 21:31:24.584040   13962 out.go:177] * Verifying gcp-auth addon...
	I0912 21:31:24.586397   13962 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:31:24.630110   13962 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:31:24.631567   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.731721   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.011074   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:25.050166   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:25.275948   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.509906   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:25.549256   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:25.717692   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.008810   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.049086   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.218290   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.511610   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.549147   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.717179   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.853738   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:27.009353   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:27.049410   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.216355   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.511194   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:27.549548   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.716857   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.146190   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:28.149294   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.336677   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.516603   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:28.550567   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.716113   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.854954   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:29.009724   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:29.049241   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.216829   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.510170   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:29.551688   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.717093   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.010129   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:30.050186   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.217058   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.508886   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:30.549261   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.717355   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.856461   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:31.010644   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:31.050122   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.216592   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.511331   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:31.550649   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.716986   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.009622   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:32.049327   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.217152   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.509472   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:32.550231   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.716353   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.857462   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:33.010420   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:33.048806   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.216643   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.510040   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:33.549924   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.716222   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.009427   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:34.049380   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.216091   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.510215   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:34.550049   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.716287   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.010796   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:35.049832   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.216580   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.354702   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:35.510144   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:35.550101   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.720184   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.009924   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:36.050399   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.217310   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.509731   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:36.549587   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.716870   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.009888   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:37.049970   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.216840   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.359202   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:37.509333   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:37.549264   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.716559   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.009645   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:38.050432   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.217075   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.510139   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:38.550701   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.716331   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.010885   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:39.051576   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:39.216723   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.510045   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:39.550393   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:39.729683   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.854988   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:40.013243   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:40.049282   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.217484   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.510402   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:40.550260   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.716866   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.010335   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:41.049181   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.216839   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.510446   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:41.549428   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.716605   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.010764   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:42.048973   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.216901   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.357509   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:42.511232   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:42.550002   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.717333   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.009989   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:43.049616   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.216539   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.510349   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:43.549216   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.716321   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.009789   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:44.049373   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.216396   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.510839   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:44.549734   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.716537   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.857339   13962 pod_ready.go:103] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:45.319634   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:45.320661   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.321057   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.511960   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:45.550270   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.716675   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.009879   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:46.049683   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.217072   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.511532   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:46.549788   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.715779   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.012866   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:47.049836   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.216137   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.353584   13962 pod_ready.go:93] pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.353604   13962 pod_ready.go:82] duration metric: took 27.505228331s for pod "coredns-7c65d6cfc9-2kvmx" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.353613   13962 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vt575" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.355182   13962 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-vt575" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vt575" not found
	I0912 21:31:47.355201   13962 pod_ready.go:82] duration metric: took 1.58322ms for pod "coredns-7c65d6cfc9-vt575" in "kube-system" namespace to be "Ready" ...
	E0912 21:31:47.355210   13962 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-vt575" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vt575" not found
	I0912 21:31:47.355216   13962 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.358987   13962 pod_ready.go:93] pod "etcd-addons-715398" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.359003   13962 pod_ready.go:82] duration metric: took 3.781145ms for pod "etcd-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.359010   13962 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.364736   13962 pod_ready.go:93] pod "kube-apiserver-addons-715398" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.364753   13962 pod_ready.go:82] duration metric: took 5.737019ms for pod "kube-apiserver-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.364761   13962 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.371129   13962 pod_ready.go:93] pod "kube-controller-manager-addons-715398" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.371145   13962 pod_ready.go:82] duration metric: took 6.378471ms for pod "kube-controller-manager-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.371155   13962 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vl2cm" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.509542   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:47.550066   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.551171   13962 pod_ready.go:93] pod "kube-proxy-vl2cm" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.551188   13962 pod_ready.go:82] duration metric: took 180.027668ms for pod "kube-proxy-vl2cm" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.551197   13962 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.716771   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.952795   13962 pod_ready.go:93] pod "kube-scheduler-addons-715398" in "kube-system" namespace has status "Ready":"True"
	I0912 21:31:47.952812   13962 pod_ready.go:82] duration metric: took 401.610161ms for pod "kube-scheduler-addons-715398" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:47.952822   13962 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace to be "Ready" ...
	I0912 21:31:48.010494   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:48.049683   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.216335   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.510114   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:48.550015   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.716866   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.010008   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:49.050063   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.216607   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.510656   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:49.551517   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.716589   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.204370   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:50.204921   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.207337   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:50.215119   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.510756   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:50.551518   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.717251   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.009874   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:51.051692   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.216146   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.509716   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:51.549231   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.717323   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.009576   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:52.050058   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.216670   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.459551   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:52.510116   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:52.550199   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.717823   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.009863   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:53.049345   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.216703   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.510525   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:53.551448   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.717305   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.009715   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:54.049661   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.216923   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.460527   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:54.513344   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:54.550668   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.716953   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:55.009952   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:55.049326   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.216307   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:55.513599   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:55.615441   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.722785   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:56.009930   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:56.049749   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.217159   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:56.462744   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:56.510046   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:56.549428   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.717034   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:57.010677   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:57.050255   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.216227   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:57.510096   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:57.549526   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.716301   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:58.009176   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:58.051281   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.216942   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:58.510790   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:58.549200   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.716421   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:58.959233   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:31:59.011960   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:59.049964   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.217061   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:59.513346   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:59.550183   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.716213   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:00.016194   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:00.116283   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:00.217045   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:00.509924   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:00.549858   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:00.716348   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:00.959829   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:32:01.010579   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:01.049754   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.216245   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:01.511083   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:01.552760   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.716945   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:02.010186   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:02.049879   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:02.216585   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:02.509603   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:02.549320   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:02.716633   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:03.009577   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:03.049452   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.216264   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:03.459904   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:32:03.511025   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:03.550367   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.716910   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:04.009723   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:04.049928   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.217040   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:04.510024   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:04.549989   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.717456   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:05.010417   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:05.050667   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.216766   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:05.509925   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:05.553068   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.716640   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:05.959394   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:32:06.012385   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:06.049059   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.217350   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:06.510660   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:06.549641   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.716859   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:07.386228   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:07.386638   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:07.386933   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:07.524623   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:07.549334   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:07.716586   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:08.009463   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:08.048902   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:08.216860   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:08.459471   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:32:08.509542   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:08.549194   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:08.715942   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:09.009926   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:09.049585   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:09.217560   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:09.509507   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:09.549914   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:09.716988   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:10.009582   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:32:10.055452   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:10.216568   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:10.509816   13962 kapi.go:107] duration metric: took 48.004152804s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:32:10.549621   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:10.716815   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:10.959662   13962 pod_ready.go:103] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"False"
	I0912 21:32:11.053089   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:11.217274   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:11.550107   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:11.716425   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:12.057408   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:12.255696   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:12.459222   13962 pod_ready.go:93] pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace has status "Ready":"True"
	I0912 21:32:12.459241   13962 pod_ready.go:82] duration metric: took 24.506412457s for pod "metrics-server-84c5f94fbc-szb47" in "kube-system" namespace to be "Ready" ...
	I0912 21:32:12.459249   13962 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b777s" in "kube-system" namespace to be "Ready" ...
	I0912 21:32:12.463943   13962 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-b777s" in "kube-system" namespace has status "Ready":"True"
	I0912 21:32:12.463964   13962 pod_ready.go:82] duration metric: took 4.707675ms for pod "nvidia-device-plugin-daemonset-b777s" in "kube-system" namespace to be "Ready" ...
	I0912 21:32:12.463983   13962 pod_ready.go:39] duration metric: took 52.676409677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:32:12.464002   13962 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:32:12.464034   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 21:32:12.464085   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 21:32:12.516362   13962 cri.go:89] found id: "92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:12.516383   13962 cri.go:89] found id: ""
	I0912 21:32:12.516391   13962 logs.go:276] 1 containers: [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1]
	I0912 21:32:12.516441   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.521412   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 21:32:12.521480   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 21:32:12.549782   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:12.562167   13962 cri.go:89] found id: "769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:12.562184   13962 cri.go:89] found id: ""
	I0912 21:32:12.562193   13962 logs.go:276] 1 containers: [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6]
	I0912 21:32:12.562249   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.566502   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 21:32:12.566563   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 21:32:12.610278   13962 cri.go:89] found id: "197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:12.610304   13962 cri.go:89] found id: ""
	I0912 21:32:12.610314   13962 logs.go:276] 1 containers: [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d]
	I0912 21:32:12.610376   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.615787   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 21:32:12.615849   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 21:32:12.655472   13962 cri.go:89] found id: "22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:12.655495   13962 cri.go:89] found id: ""
	I0912 21:32:12.655502   13962 logs.go:276] 1 containers: [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146]
	I0912 21:32:12.655568   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.659628   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 21:32:12.659685   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 21:32:12.696017   13962 cri.go:89] found id: "dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:12.696039   13962 cri.go:89] found id: ""
	I0912 21:32:12.696046   13962 logs.go:276] 1 containers: [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149]
	I0912 21:32:12.696092   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.700171   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 21:32:12.700240   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 21:32:12.717131   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:12.738745   13962 cri.go:89] found id: "60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:12.738766   13962 cri.go:89] found id: ""
	I0912 21:32:12.738774   13962 logs.go:276] 1 containers: [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42]
	I0912 21:32:12.738823   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:12.742998   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 21:32:12.743065   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 21:32:12.779751   13962 cri.go:89] found id: ""
	I0912 21:32:12.779776   13962 logs.go:276] 0 containers: []
	W0912 21:32:12.779785   13962 logs.go:278] No container was found matching "kindnet"
	I0912 21:32:12.779794   13962 logs.go:123] Gathering logs for container status ...
	I0912 21:32:12.779806   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 21:32:12.836965   13962 logs.go:123] Gathering logs for kubelet ...
	I0912 21:32:12.836996   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 21:32:12.892290   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.393815    1192 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:12.892476   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:12.892645   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:12.892813   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:12.894101   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:12.894342   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:12.914058   13962 logs.go:123] Gathering logs for kube-controller-manager [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42] ...
	I0912 21:32:12.914095   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:12.980598   13962 logs.go:123] Gathering logs for kube-apiserver [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1] ...
	I0912 21:32:12.980637   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:13.042989   13962 logs.go:123] Gathering logs for etcd [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6] ...
	I0912 21:32:13.043024   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:13.049216   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:13.092928   13962 logs.go:123] Gathering logs for coredns [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d] ...
	I0912 21:32:13.092960   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:13.147355   13962 logs.go:123] Gathering logs for kube-scheduler [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146] ...
	I0912 21:32:13.147385   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:13.192669   13962 logs.go:123] Gathering logs for kube-proxy [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149] ...
	I0912 21:32:13.192703   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:13.216626   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:13.233594   13962 logs.go:123] Gathering logs for containerd ...
	I0912 21:32:13.233618   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 21:32:13.298217   13962 logs.go:123] Gathering logs for dmesg ...
	I0912 21:32:13.298254   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 21:32:13.316142   13962 logs.go:123] Gathering logs for describe nodes ...
	I0912 21:32:13.316166   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 21:32:13.452514   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:13.452543   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 21:32:13.452590   13962 out.go:270] X Problems detected in kubelet:
	W0912 21:32:13.452599   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:13.452611   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:13.452619   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:13.452624   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:13.452631   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:13.452636   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:13.452642   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:32:13.549417   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:13.716760   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:14.049461   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:14.218352   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:14.550996   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:14.717090   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:15.050531   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:15.217246   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:15.552048   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:15.716768   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:16.050540   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:16.216610   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:16.549898   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:16.716540   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:17.051761   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:17.217468   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:17.549618   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:17.716342   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:18.049621   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:18.216255   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:18.549421   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:18.716656   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:19.049954   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:19.216951   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:19.550625   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:19.716476   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:20.048819   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:20.218774   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:20.549263   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:20.716646   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:21.052063   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:21.216773   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:21.549894   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:21.717577   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:22.049403   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:22.216969   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:22.550838   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:22.717031   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:23.050093   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:23.218023   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:23.453480   13962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:32:23.478084   13962 api_server.go:72] duration metric: took 1m13.024896611s to wait for apiserver process to appear ...
	I0912 21:32:23.478108   13962 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:32:23.478146   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 21:32:23.478203   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 21:32:23.525412   13962 cri.go:89] found id: "92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:23.525430   13962 cri.go:89] found id: ""
	I0912 21:32:23.525438   13962 logs.go:276] 1 containers: [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1]
	I0912 21:32:23.525502   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.530335   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 21:32:23.530391   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 21:32:23.550578   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:23.573710   13962 cri.go:89] found id: "769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:23.573732   13962 cri.go:89] found id: ""
	I0912 21:32:23.573740   13962 logs.go:276] 1 containers: [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6]
	I0912 21:32:23.573804   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.578028   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 21:32:23.578139   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 21:32:23.622724   13962 cri.go:89] found id: "197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:23.622748   13962 cri.go:89] found id: ""
	I0912 21:32:23.622757   13962 logs.go:276] 1 containers: [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d]
	I0912 21:32:23.622816   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.626986   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 21:32:23.627038   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 21:32:23.669053   13962 cri.go:89] found id: "22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:23.669075   13962 cri.go:89] found id: ""
	I0912 21:32:23.669082   13962 logs.go:276] 1 containers: [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146]
	I0912 21:32:23.669130   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.673545   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 21:32:23.673604   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 21:32:23.707615   13962 cri.go:89] found id: "dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:23.707638   13962 cri.go:89] found id: ""
	I0912 21:32:23.707647   13962 logs.go:276] 1 containers: [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149]
	I0912 21:32:23.707732   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.712212   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 21:32:23.712282   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 21:32:23.716715   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:23.751185   13962 cri.go:89] found id: "60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:23.751205   13962 cri.go:89] found id: ""
	I0912 21:32:23.751212   13962 logs.go:276] 1 containers: [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42]
	I0912 21:32:23.751256   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:23.755502   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 21:32:23.755562   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 21:32:23.791233   13962 cri.go:89] found id: ""
	I0912 21:32:23.791265   13962 logs.go:276] 0 containers: []
	W0912 21:32:23.791277   13962 logs.go:278] No container was found matching "kindnet"
	I0912 21:32:23.791288   13962 logs.go:123] Gathering logs for kubelet ...
	I0912 21:32:23.791301   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 21:32:23.840653   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.393815    1192 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:23.840878   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:23.841024   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:23.841189   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:23.842402   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:23.842566   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:23.861431   13962 logs.go:123] Gathering logs for dmesg ...
	I0912 21:32:23.861457   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 21:32:23.876861   13962 logs.go:123] Gathering logs for kube-apiserver [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1] ...
	I0912 21:32:23.876887   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:23.937544   13962 logs.go:123] Gathering logs for kube-scheduler [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146] ...
	I0912 21:32:23.937583   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:23.989206   13962 logs.go:123] Gathering logs for kube-proxy [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149] ...
	I0912 21:32:23.989238   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:24.030459   13962 logs.go:123] Gathering logs for kube-controller-manager [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42] ...
	I0912 21:32:24.030498   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:24.050560   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:24.110840   13962 logs.go:123] Gathering logs for describe nodes ...
	I0912 21:32:24.110874   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 21:32:24.216021   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:24.238401   13962 logs.go:123] Gathering logs for etcd [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6] ...
	I0912 21:32:24.238434   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:24.296418   13962 logs.go:123] Gathering logs for coredns [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d] ...
	I0912 21:32:24.296446   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:24.335661   13962 logs.go:123] Gathering logs for containerd ...
	I0912 21:32:24.335686   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 21:32:24.405974   13962 logs.go:123] Gathering logs for container status ...
	I0912 21:32:24.406013   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 21:32:24.452231   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:24.452255   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 21:32:24.452306   13962 out.go:270] X Problems detected in kubelet:
	W0912 21:32:24.452316   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:24.452323   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:24.452332   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:24.452347   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:24.452360   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:24.452368   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:24.452377   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:32:24.549607   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:24.719523   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:25.049806   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:25.216081   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:25.549692   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:25.717389   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:26.050096   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:26.216711   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:26.549487   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:26.716600   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:27.049286   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:27.217453   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:27.552254   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:27.717801   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:28.049769   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:28.216462   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:28.549632   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:28.716528   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:29.049409   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:29.216510   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:29.549958   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:29.719191   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:30.050389   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:30.218143   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:30.549571   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:30.717119   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:31.049548   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:31.216217   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:31.548955   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:31.716290   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:32.049397   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:32.216341   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:32.553664   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:32.717506   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:33.049849   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:33.217448   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:33.550130   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:33.717549   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:34.049895   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:34.216729   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:34.453419   13962 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0912 21:32:34.458967   13962 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0912 21:32:34.460059   13962 api_server.go:141] control plane version: v1.31.1
	I0912 21:32:34.460081   13962 api_server.go:131] duration metric: took 10.981966125s to wait for apiserver health ...
	I0912 21:32:34.460088   13962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:32:34.460107   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 21:32:34.460152   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 21:32:34.528651   13962 cri.go:89] found id: "92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:34.528678   13962 cri.go:89] found id: ""
	I0912 21:32:34.528687   13962 logs.go:276] 1 containers: [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1]
	I0912 21:32:34.528748   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.533029   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 21:32:34.533092   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 21:32:34.558458   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:34.581999   13962 cri.go:89] found id: "769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:34.582024   13962 cri.go:89] found id: ""
	I0912 21:32:34.582033   13962 logs.go:276] 1 containers: [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6]
	I0912 21:32:34.582091   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.587618   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 21:32:34.587684   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 21:32:34.632255   13962 cri.go:89] found id: "197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:34.632279   13962 cri.go:89] found id: ""
	I0912 21:32:34.632289   13962 logs.go:276] 1 containers: [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d]
	I0912 21:32:34.632346   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.636694   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 21:32:34.636756   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 21:32:34.679280   13962 cri.go:89] found id: "22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:34.679306   13962 cri.go:89] found id: ""
	I0912 21:32:34.679315   13962 logs.go:276] 1 containers: [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146]
	I0912 21:32:34.679365   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.684540   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 21:32:34.684595   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 21:32:34.716065   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:34.722387   13962 cri.go:89] found id: "dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:34.722407   13962 cri.go:89] found id: ""
	I0912 21:32:34.722415   13962 logs.go:276] 1 containers: [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149]
	I0912 21:32:34.722456   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.727018   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 21:32:34.727081   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 21:32:34.765155   13962 cri.go:89] found id: "60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:34.765174   13962 cri.go:89] found id: ""
	I0912 21:32:34.765182   13962 logs.go:276] 1 containers: [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42]
	I0912 21:32:34.765227   13962 ssh_runner.go:195] Run: which crictl
	I0912 21:32:34.771335   13962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 21:32:34.771409   13962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 21:32:34.808419   13962 cri.go:89] found id: ""
	I0912 21:32:34.808446   13962 logs.go:276] 0 containers: []
	W0912 21:32:34.808458   13962 logs.go:278] No container was found matching "kindnet"
	I0912 21:32:34.808469   13962 logs.go:123] Gathering logs for dmesg ...
	I0912 21:32:34.808484   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 21:32:34.822941   13962 logs.go:123] Gathering logs for kube-apiserver [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1] ...
	I0912 21:32:34.822967   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1"
	I0912 21:32:34.884810   13962 logs.go:123] Gathering logs for etcd [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6] ...
	I0912 21:32:34.884935   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6"
	I0912 21:32:34.941144   13962 logs.go:123] Gathering logs for coredns [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d] ...
	I0912 21:32:34.941174   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d"
	I0912 21:32:34.988567   13962 logs.go:123] Gathering logs for kube-controller-manager [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42] ...
	I0912 21:32:34.988599   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42"
	I0912 21:32:35.050241   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:35.064488   13962 logs.go:123] Gathering logs for kubelet ...
	I0912 21:32:35.064536   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 21:32:35.117319   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.393815    1192 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:35.117527   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:35.117705   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:35.117903   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:35.119136   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:35.119313   13962 logs.go:138] Found kubelet problem: Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:35.141018   13962 logs.go:123] Gathering logs for describe nodes ...
	I0912 21:32:35.141058   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 21:32:35.216384   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:35.276893   13962 logs.go:123] Gathering logs for kube-scheduler [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146] ...
	I0912 21:32:35.276924   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146"
	I0912 21:32:35.321431   13962 logs.go:123] Gathering logs for kube-proxy [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149] ...
	I0912 21:32:35.321472   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149"
	I0912 21:32:35.360375   13962 logs.go:123] Gathering logs for containerd ...
	I0912 21:32:35.360411   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 21:32:35.428263   13962 logs.go:123] Gathering logs for container status ...
	I0912 21:32:35.428296   13962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 21:32:35.486540   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:35.486565   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 21:32:35.486613   13962 out.go:270] X Problems detected in kubelet:
	W0912 21:32:35.486628   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.393987    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:35.486639   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: W0912 21:31:16.394066    1192 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-715398' and this object
	W0912 21:32:35.486648   13962 out.go:270]   Sep 12 21:31:16 addons-715398 kubelet[1192]: E0912 21:31:16.394113    1192 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	W0912 21:32:35.486658   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: W0912 21:31:17.071664    1192 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-715398" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-715398' and this object
	W0912 21:32:35.486673   13962 out.go:270]   Sep 12 21:31:17 addons-715398 kubelet[1192]: E0912 21:31:17.071692    1192 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-715398\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-715398' and this object" logger="UnhandledError"
	I0912 21:32:35.486680   13962 out.go:358] Setting ErrFile to fd 2...
	I0912 21:32:35.486688   13962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:32:35.549778   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:35.716280   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:36.049841   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:36.216591   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:36.549761   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:36.716886   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:37.050199   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:37.234404   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:37.549821   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:37.925017   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:38.049412   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:38.218109   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:38.551718   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:38.717794   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:39.049143   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:39.216815   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:39.550469   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:39.721870   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:40.056512   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:40.225416   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:40.550113   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:40.717728   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:41.048971   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:41.216428   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:41.548798   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:41.716412   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:42.051233   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:42.216676   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:42.550158   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:42.717232   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:43.050062   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:43.216406   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:43.549853   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:43.716480   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:44.050571   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:44.216121   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:44.554099   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:44.717525   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:45.287807   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:45.288040   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:45.502417   13962 system_pods.go:59] 18 kube-system pods found
	I0912 21:32:45.502446   13962 system_pods.go:61] "coredns-7c65d6cfc9-2kvmx" [9c5337a9-4ce1-4d8a-9dd0-e963ad695469] Running
	I0912 21:32:45.502457   13962 system_pods.go:61] "csi-hostpath-attacher-0" [f2aadfa9-2f42-4217-a8e5-03048c45cda4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:32:45.502464   13962 system_pods.go:61] "csi-hostpath-resizer-0" [1231ad9a-e80b-463f-8816-288fbb247114] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:32:45.502471   13962 system_pods.go:61] "csi-hostpathplugin-xc57j" [bcca1da2-f2cb-48ec-af9a-d85f95245f78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:32:45.502476   13962 system_pods.go:61] "etcd-addons-715398" [e08d95e4-a4d4-4831-8cd4-364d2dd8f2d0] Running
	I0912 21:32:45.502480   13962 system_pods.go:61] "kube-apiserver-addons-715398" [92589e8d-f163-4a53-841d-dd8cac3b995f] Running
	I0912 21:32:45.502484   13962 system_pods.go:61] "kube-controller-manager-addons-715398" [6317a4c6-6a1f-4970-bbf0-aff9854f0743] Running
	I0912 21:32:45.502488   13962 system_pods.go:61] "kube-ingress-dns-minikube" [371fc3e5-942e-43a4-aa43-19dc288be015] Running
	I0912 21:32:45.502492   13962 system_pods.go:61] "kube-proxy-vl2cm" [372353b3-5f28-436f-ba52-ceec8711abe0] Running
	I0912 21:32:45.502495   13962 system_pods.go:61] "kube-scheduler-addons-715398" [5c396abd-fcb9-48c9-aa1b-5a94dd074ff0] Running
	I0912 21:32:45.502501   13962 system_pods.go:61] "metrics-server-84c5f94fbc-szb47" [23fcc711-0ca1-483e-ba96-3ac1a0ad6262] Running
	I0912 21:32:45.502504   13962 system_pods.go:61] "nvidia-device-plugin-daemonset-b777s" [7331841e-12ab-47ba-bd0a-65f840b6ec72] Running
	I0912 21:32:45.502510   13962 system_pods.go:61] "registry-66c9cd494c-zk84l" [49ab21ec-c966-42ee-8094-c36eab0ac340] Running
	I0912 21:32:45.502513   13962 system_pods.go:61] "registry-proxy-ljcbv" [024b69e7-bbbc-4ada-a1a0-2a1141baeac9] Running
	I0912 21:32:45.502516   13962 system_pods.go:61] "snapshot-controller-56fcc65765-cs8t4" [81cb95d9-d344-47c3-8c54-6a9c03aab2ab] Running
	I0912 21:32:45.502522   13962 system_pods.go:61] "snapshot-controller-56fcc65765-n8fkr" [ceb93783-f135-4817-b6f6-120bbba0afe8] Running
	I0912 21:32:45.502527   13962 system_pods.go:61] "storage-provisioner" [67eeea4d-71b3-4af7-a01f-ecbf663c40f4] Running
	I0912 21:32:45.502532   13962 system_pods.go:61] "tiller-deploy-b48cc5f79-g2nw5" [79597e16-af9b-4833-84f6-c20a8280464f] Running
	I0912 21:32:45.502537   13962 system_pods.go:74] duration metric: took 11.042444644s to wait for pod list to return data ...
	I0912 21:32:45.502544   13962 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:32:45.505646   13962 default_sa.go:45] found service account: "default"
	I0912 21:32:45.505685   13962 default_sa.go:55] duration metric: took 3.119901ms for default service account to be created ...
	I0912 21:32:45.505700   13962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:32:45.513024   13962 system_pods.go:86] 18 kube-system pods found
	I0912 21:32:45.513048   13962 system_pods.go:89] "coredns-7c65d6cfc9-2kvmx" [9c5337a9-4ce1-4d8a-9dd0-e963ad695469] Running
	I0912 21:32:45.513057   13962 system_pods.go:89] "csi-hostpath-attacher-0" [f2aadfa9-2f42-4217-a8e5-03048c45cda4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:32:45.513063   13962 system_pods.go:89] "csi-hostpath-resizer-0" [1231ad9a-e80b-463f-8816-288fbb247114] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:32:45.513074   13962 system_pods.go:89] "csi-hostpathplugin-xc57j" [bcca1da2-f2cb-48ec-af9a-d85f95245f78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:32:45.513080   13962 system_pods.go:89] "etcd-addons-715398" [e08d95e4-a4d4-4831-8cd4-364d2dd8f2d0] Running
	I0912 21:32:45.513086   13962 system_pods.go:89] "kube-apiserver-addons-715398" [92589e8d-f163-4a53-841d-dd8cac3b995f] Running
	I0912 21:32:45.513092   13962 system_pods.go:89] "kube-controller-manager-addons-715398" [6317a4c6-6a1f-4970-bbf0-aff9854f0743] Running
	I0912 21:32:45.513097   13962 system_pods.go:89] "kube-ingress-dns-minikube" [371fc3e5-942e-43a4-aa43-19dc288be015] Running
	I0912 21:32:45.513102   13962 system_pods.go:89] "kube-proxy-vl2cm" [372353b3-5f28-436f-ba52-ceec8711abe0] Running
	I0912 21:32:45.513108   13962 system_pods.go:89] "kube-scheduler-addons-715398" [5c396abd-fcb9-48c9-aa1b-5a94dd074ff0] Running
	I0912 21:32:45.513117   13962 system_pods.go:89] "metrics-server-84c5f94fbc-szb47" [23fcc711-0ca1-483e-ba96-3ac1a0ad6262] Running
	I0912 21:32:45.513123   13962 system_pods.go:89] "nvidia-device-plugin-daemonset-b777s" [7331841e-12ab-47ba-bd0a-65f840b6ec72] Running
	I0912 21:32:45.513132   13962 system_pods.go:89] "registry-66c9cd494c-zk84l" [49ab21ec-c966-42ee-8094-c36eab0ac340] Running
	I0912 21:32:45.513137   13962 system_pods.go:89] "registry-proxy-ljcbv" [024b69e7-bbbc-4ada-a1a0-2a1141baeac9] Running
	I0912 21:32:45.513142   13962 system_pods.go:89] "snapshot-controller-56fcc65765-cs8t4" [81cb95d9-d344-47c3-8c54-6a9c03aab2ab] Running
	I0912 21:32:45.513147   13962 system_pods.go:89] "snapshot-controller-56fcc65765-n8fkr" [ceb93783-f135-4817-b6f6-120bbba0afe8] Running
	I0912 21:32:45.513156   13962 system_pods.go:89] "storage-provisioner" [67eeea4d-71b3-4af7-a01f-ecbf663c40f4] Running
	I0912 21:32:45.513165   13962 system_pods.go:89] "tiller-deploy-b48cc5f79-g2nw5" [79597e16-af9b-4833-84f6-c20a8280464f] Running
	I0912 21:32:45.513174   13962 system_pods.go:126] duration metric: took 7.465963ms to wait for k8s-apps to be running ...
	I0912 21:32:45.513184   13962 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:32:45.513234   13962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:32:45.530547   13962 system_svc.go:56] duration metric: took 17.350203ms WaitForService to wait for kubelet
	I0912 21:32:45.530589   13962 kubeadm.go:582] duration metric: took 1m35.077405869s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:32:45.530612   13962 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:32:45.534608   13962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:32:45.534636   13962 node_conditions.go:123] node cpu capacity is 2
	I0912 21:32:45.534648   13962 node_conditions.go:105] duration metric: took 4.031128ms to run NodePressure ...
	I0912 21:32:45.534665   13962 start.go:241] waiting for startup goroutines ...
	I0912 21:32:45.549783   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:45.717435   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:46.051019   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:46.216697   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:46.550409   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:46.716070   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:47.050086   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:47.217156   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:47.550347   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:47.716274   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:48.049839   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:48.216674   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:48.554929   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:48.715760   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:49.050121   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:49.217318   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:49.549483   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:49.717574   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:50.050334   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:50.217472   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:50.550070   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:50.716411   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:51.050071   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:51.216387   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:51.549099   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:51.726541   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:52.050544   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:52.216221   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:52.550688   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:52.751265   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:53.050319   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:53.242096   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:53.549760   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:53.716247   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:54.049451   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:54.216321   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:54.549761   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:54.717090   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:55.052939   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:55.216623   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:55.551251   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:55.715447   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:56.049535   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:56.220673   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:56.549734   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:56.716482   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:57.049008   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:57.217392   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:57.549802   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:57.718025   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:58.049746   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:58.217124   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:58.847674   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:58.849493   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:59.050524   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:59.242053   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:32:59.550050   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:59.716806   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:00.050991   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:00.216360   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:00.550102   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:00.717546   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:01.052882   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:01.217000   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:01.550768   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:01.716783   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:02.051394   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:02.216444   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:02.550133   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:02.718468   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:03.049861   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:03.216328   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:03.549559   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:03.716503   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:04.049513   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:04.215927   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:04.549897   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:04.716483   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:05.051752   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:33:05.216294   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:05.550172   13962 kapi.go:107] duration metric: took 1m42.505320281s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:33:05.716475   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:06.216946   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:06.717280   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:07.216367   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:07.716832   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:08.216681   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:08.716899   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:09.216160   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:09.716805   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:10.270898   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:10.716494   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:11.216702   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:11.716379   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:12.217420   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:12.717218   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:13.216069   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:13.716092   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:14.216647   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:14.716418   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:15.216995   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:15.716430   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:16.215753   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:16.717334   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:17.216942   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:17.716783   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:18.216957   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:18.716787   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:19.217105   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:19.716807   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:20.216636   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:20.716633   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:21.217031   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:21.716813   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:22.216355   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:22.716946   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:23.216745   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:23.716314   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:24.216797   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:24.716808   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:25.217155   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:25.717066   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:26.216071   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:26.717253   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:27.217181   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:27.716850   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:28.216766   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:28.716283   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:29.217410   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:29.716860   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:30.216507   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:30.716013   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:31.216125   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:31.717059   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:32.216820   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:32.717441   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:33.216492   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:33.716525   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:34.216859   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:34.717936   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:35.216505   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:35.716176   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:36.216722   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:36.716796   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:37.217289   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:37.717807   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:38.220535   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:38.716373   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:39.216940   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:39.716540   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:40.216590   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:40.717019   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:41.218536   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:41.721091   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:42.215651   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:42.715773   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:43.216032   13962 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:33:43.717042   13962 kapi.go:107] duration metric: took 2m24.004947146s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:34:08.591348   13962 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:34:08.591370   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:09.090035   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:09.590290   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:10.091258   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:10.591595   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:11.090782   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:11.590424   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:12.090151   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:12.591048   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:13.090891   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:13.590617   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:14.090355   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:14.590104   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:15.090620   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:15.591722   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:16.090442   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:16.590631   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:17.090305   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:17.590701   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:18.091288   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:18.589486   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:19.090788   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:19.590486   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:20.090925   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:20.590549   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:21.090883   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:21.590202   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:22.090028   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:22.590300   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:23.090655   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:23.590094   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:24.090538   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:24.590438   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:25.095777   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:25.594385   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:26.090034   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:26.590503   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:27.090235   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:27.598614   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:28.090264   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:28.589827   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:29.091114   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:29.590638   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:30.089376   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:30.591805   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:31.090016   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:31.590728   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:32.091357   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:32.589817   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:33.090764   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:33.590132   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:34.090571   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:34.590037   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:35.090763   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:35.590094   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:36.089853   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:36.590142   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:37.090827   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:37.590620   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:38.090637   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:38.590486   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:39.090177   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:39.591148   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:40.089688   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:40.590282   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:41.090288   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:41.589716   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:42.090809   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:42.590211   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:43.090229   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:43.590581   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:44.090498   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:44.591862   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:45.090432   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:45.590819   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:46.090089   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:46.591211   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:47.091278   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:47.590824   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:48.090498   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:48.590272   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:49.090043   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:49.590431   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:50.090410   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:50.590403   13962 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:34:51.091179   13962 kapi.go:107] duration metric: took 3m26.50477837s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:34:51.093784   13962 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-715398 cluster.
	I0912 21:34:51.095076   13962 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:34:51.096229   13962 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:34:51.097516   13962 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, inspektor-gadget, volcano, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 21:34:51.098853   13962 addons.go:510] duration metric: took 3m40.645645815s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin helm-tiller inspektor-gadget volcano metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 21:34:51.098893   13962 start.go:246] waiting for cluster config update ...
	I0912 21:34:51.098911   13962 start.go:255] writing updated cluster config ...
	I0912 21:34:51.099144   13962 ssh_runner.go:195] Run: rm -f paused
	I0912 21:34:51.148600   13962 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:34:51.150533   13962 out.go:177] * Done! kubectl is now configured to use "addons-715398" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	524b181a0357c       9056ab77afb8e       33 minutes ago      Running             hello-world-app           0                   25424ab501509       hello-world-app-55bf9c44b4-vcjj9
	014ab2c94f7fd       c7b4f26a7d93f       33 minutes ago      Running             nginx                     0                   8edef152c1c74       nginx
	68a2c857e1aa1       195d612ae7722       34 minutes ago      Running             gadget                    5                   bbe4364125e8a       gadget-45fbz
	b0c74b5d5d4e2       56cc512116c8f       34 minutes ago      Running             busybox                   0                   73bbe2596131d       busybox
	18d920e282681       195d612ae7722       35 minutes ago      Exited              gadget                    4                   bbe4364125e8a       gadget-45fbz
	4968d944826eb       6e38f40d628db       39 minutes ago      Running             storage-provisioner       0                   e0c6688de2a67       storage-provisioner
	197e175f79295       c69fa2e9cbf5f       39 minutes ago      Running             coredns                   0                   71a01dee000e9       coredns-7c65d6cfc9-2kvmx
	dda4cfce3068d       60c005f310ff3       39 minutes ago      Running             kube-proxy                0                   ab347e64da009       kube-proxy-vl2cm
	22df3bb70c0ee       9aa1fad941575       39 minutes ago      Running             kube-scheduler            0                   b20a63c63b9d2       kube-scheduler-addons-715398
	60b9b9bad0c29       175ffd71cce3d       39 minutes ago      Running             kube-controller-manager   0                   9f553abb35f6c       kube-controller-manager-addons-715398
	769df62991c87       2e96e5913fc06       39 minutes ago      Running             etcd                      0                   43fd91421f4e5       etcd-addons-715398
	92461bd7d44ef       6bab7719df100       39 minutes ago      Running             kube-apiserver            0                   a4c0740f66829       kube-apiserver-addons-715398
	
	
	==> containerd <==
	Sep 12 22:09:42 addons-715398 containerd[650]: time="2024-09-12T22:09:42.845637979Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"5ba1bf51510eec758de5a100304e4e392a8f01c437facc230637b0170e411a22\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:47 addons-715398 containerd[650]: time="2024-09-12T22:09:47.820902922Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"9ef99ce5edcb01077a8caf576d20ced21b4a899f2bff919afe484c4866d8376f\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:47 addons-715398 containerd[650]: time="2024-09-12T22:09:47.835204674Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"4c7aa5d4c0743652c2042c6aa62348ef66de3220d0411bc8c66ae05eba2027dd\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:47 addons-715398 containerd[650]: time="2024-09-12T22:09:47.847571326Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"936806bc56b9afe415c1a6f7c66a6c4e7fba3d429c6d9e51e4166730be7b2546\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:52 addons-715398 containerd[650]: time="2024-09-12T22:09:52.817348639Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"65b4a9b34198fdd84f8f198eb89564ac91d7fe59d2e51c726659c48749c13215\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:52 addons-715398 containerd[650]: time="2024-09-12T22:09:52.835510545Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"7bea1bebc564529a6daa1480144778b170e2f8404fd34c1a3fe1fb60c4a6d548\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:52 addons-715398 containerd[650]: time="2024-09-12T22:09:52.853238895Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"a4f0bdcc183e0dd5374c32efc1f8f4a29b003c00eeea61e2c0c5dd690779a98d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:57 addons-715398 containerd[650]: time="2024-09-12T22:09:57.820048301Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"c99ac24a0bb6608b69b60187288a8753338616d163b597be992b6eb7f6f0e438\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:57 addons-715398 containerd[650]: time="2024-09-12T22:09:57.834347617Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"c06256e5879230c95040e98190ec52d2236605569c890d26b36a2d75895df35b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:09:57 addons-715398 containerd[650]: time="2024-09-12T22:09:57.847148412Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"4aa76b18c3ecfe3f8004fad5bb5500495d857097827a45454316804a510ec786\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:02 addons-715398 containerd[650]: time="2024-09-12T22:10:02.818229543Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"77929e7b28a2af8deff29fd6103379865544078df5c1216a897c7b0acc342770\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:02 addons-715398 containerd[650]: time="2024-09-12T22:10:02.840943552Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"0326764a3d20cb05021cb090391b9432349a9b82524b193864e2c94ffe191eb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:02 addons-715398 containerd[650]: time="2024-09-12T22:10:02.857751485Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"c2792b14b0f984ec3fc6b915c751df35f515c3dd9655215a26bc9655fc01ffc8\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:07 addons-715398 containerd[650]: time="2024-09-12T22:10:07.827093979Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"034ab0fb5592c0a1d05fd0792e14ef47fddacc36ad47e1bb52729ae15ba34fbf\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:07 addons-715398 containerd[650]: time="2024-09-12T22:10:07.840139267Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"c8860b19af947a3faa457208300b345678f96187199ef8f2e1245dca18d85698\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:07 addons-715398 containerd[650]: time="2024-09-12T22:10:07.851692433Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"712fa0f2fcf6c490a7a05bd46618fc75fcbc780abe5b4cda05bc5cba273aa879\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:12 addons-715398 containerd[650]: time="2024-09-12T22:10:12.818117954Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"d486ccc48f1f9fa91dff933c933f5d1429887ce9a22ee8701c40bffb66e7a5b7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:12 addons-715398 containerd[650]: time="2024-09-12T22:10:12.844020763Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"7b59c173ab41e59cde45a24d615b5e447136343538b189486506b9540264dc98\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:12 addons-715398 containerd[650]: time="2024-09-12T22:10:12.858018122Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"450464a2214578f5ef10a831f2c1e33b669f28ed20cf38ed69ee4490b0443132\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:17 addons-715398 containerd[650]: time="2024-09-12T22:10:17.822101570Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"7e6b0a044a92e9a2c74afad38bd6da01971527ebb95e5355e966eb01986953cb\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:17 addons-715398 containerd[650]: time="2024-09-12T22:10:17.838973423Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"0170edcf76c34caab7625cc794897f81899b1ead44e295d7c98d6bc00b91765b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:17 addons-715398 containerd[650]: time="2024-09-12T22:10:17.852527475Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"83510ba536f112a590c64c586e41d31e5c3aadfa4e5d6e3d5c29f2104bd5b7ee\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:22 addons-715398 containerd[650]: time="2024-09-12T22:10:22.820109908Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"f5a49a6e355c601b4020d7578e61c73aab998298ea698d4689b9b716e8825a01\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:22 addons-715398 containerd[650]: time="2024-09-12T22:10:22.836817498Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"73fd684dc7d5cf3f83ea3b8cc02cf6f579c6aa6aa677be4445df1b3a8e0fed1d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:10:22 addons-715398 containerd[650]: time="2024-09-12T22:10:22.851567391Z" level=error msg="ExecSync for \"68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58\" failed" error="failed to exec in container: failed to start exec \"1ea34e1c6ccd31e6c75496fc34831ec8ed05edfeb5fc6b437d6906d0c8a5ec4a\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	
	
	==> coredns [197e175f79295925b9bbbc33dbaddebc786af4227f005fc1e2b251d9c5d9a35d] <==
	[INFO] 10.244.0.23:53322 - 14748 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131163s
	[INFO] 10.244.0.23:53322 - 8619 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138717s
	[INFO] 10.244.0.23:53322 - 56785 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000105073s
	[INFO] 10.244.0.23:53322 - 49355 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000130635s
	[INFO] 10.244.0.23:45324 - 20064 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080825s
	[INFO] 10.244.0.23:45324 - 17447 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092296s
	[INFO] 10.244.0.23:45324 - 39756 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044379s
	[INFO] 10.244.0.23:45324 - 32709 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004711s
	[INFO] 10.244.0.23:45324 - 1894 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082489s
	[INFO] 10.244.0.23:45324 - 41912 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084837s
	[INFO] 10.244.0.23:45324 - 34754 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000093141s
	[INFO] 10.244.0.23:39866 - 24817 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112791s
	[INFO] 10.244.0.23:39866 - 9095 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099302s
	[INFO] 10.244.0.23:50551 - 35275 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056472s
	[INFO] 10.244.0.23:39866 - 58870 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056607s
	[INFO] 10.244.0.23:39866 - 54002 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000136662s
	[INFO] 10.244.0.23:39866 - 30320 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000576245s
	[INFO] 10.244.0.23:50551 - 35972 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083454s
	[INFO] 10.244.0.23:50551 - 19466 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008922s
	[INFO] 10.244.0.23:50551 - 11300 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058128s
	[INFO] 10.244.0.23:39866 - 50162 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041603s
	[INFO] 10.244.0.23:39866 - 46341 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045499s
	[INFO] 10.244.0.23:50551 - 42918 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029143s
	[INFO] 10.244.0.23:50551 - 35475 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061706s
	[INFO] 10.244.0.23:50551 - 49338 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054953s
	
	
	==> describe nodes <==
	Name:               addons-715398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-715398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-715398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_31_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-715398
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:31:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-715398
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:10:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:08:20 +0000   Thu, 12 Sep 2024 21:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:08:20 +0000   Thu, 12 Sep 2024 21:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:08:20 +0000   Thu, 12 Sep 2024 21:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:08:20 +0000   Thu, 12 Sep 2024 21:31:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    addons-715398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6ae25679ea541f3bb3efe56101d1210
	  System UUID:                b6ae2567-9ea5-41f3-bb3e-fe56101d1210
	  Boot ID:                    47be5efc-6a1b-4dc9-916b-4dce7688c7eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     hello-world-app-55bf9c44b4-vcjj9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  gadget                      gadget-45fbz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 coredns-7c65d6cfc9-2kvmx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     39m
	  kube-system                 etcd-addons-715398                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         39m
	  kube-system                 kube-apiserver-addons-715398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-controller-manager-addons-715398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-proxy-vl2cm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-scheduler-addons-715398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39m                kube-proxy       
	  Normal  Starting                 39m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet          Node addons-715398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m (x8 over 39m)  kubelet          Node addons-715398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m (x7 over 39m)  kubelet          Node addons-715398 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 39m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  39m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39m                kubelet          Node addons-715398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m                kubelet          Node addons-715398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m                kubelet          Node addons-715398 status is now: NodeHasSufficientPID
	  Normal  NodeReady                39m                kubelet          Node addons-715398 status is now: NodeReady
	  Normal  RegisteredNode           39m                node-controller  Node addons-715398 event: Registered Node addons-715398 in Controller
	
	
	==> dmesg <==
	[  +5.752619] kauditd_printk_skb: 20 callbacks suppressed
	[Sep12 21:33] kauditd_printk_skb: 48 callbacks suppressed
	[ +12.843214] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.948857] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.368835] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.659881] kauditd_printk_skb: 37 callbacks suppressed
	[Sep12 21:34] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.153860] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.482030] kauditd_printk_skb: 2 callbacks suppressed
	[Sep12 21:35] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.192923] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.640099] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.962492] kauditd_printk_skb: 10 callbacks suppressed
	[Sep12 21:36] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.008522] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.038274] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.482455] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.080625] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.753398] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.354117] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.383625] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.583574] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.840406] kauditd_printk_skb: 45 callbacks suppressed
	[Sep12 21:37] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.325754] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [769df62991c87c543bdf7c780c32f9bb469a01e7ade8b4ed7806f657ed762fb6] <==
	{"level":"warn","ts":"2024-09-12T21:35:24.460050Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.352979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/commands.bus.volcano.sh\" ","response":"range_response_count:1 size:7928"}
	{"level":"info","ts":"2024-09-12T21:35:24.460112Z","caller":"traceutil/trace.go:171","msg":"trace[1279832919] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/commands.bus.volcano.sh; range_end:; response_count:1; response_revision:1709; }","duration":"251.551006ms","start":"2024-09-12T21:35:24.208549Z","end":"2024-09-12T21:35:24.460100Z","steps":["trace[1279832919] 'range keys from in-memory index tree'  (duration: 246.182159ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:35:24.482565Z","caller":"traceutil/trace.go:171","msg":"trace[189973185] transaction","detail":"{read_only:false; response_revision:1710; number_of_response:1; }","duration":"125.997669ms","start":"2024-09-12T21:35:24.356547Z","end":"2024-09-12T21:35:24.482544Z","steps":["trace[189973185] 'process raft request'  (duration: 92.272111ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:36:40.370346Z","caller":"traceutil/trace.go:171","msg":"trace[283169378] transaction","detail":"{read_only:false; response_revision:2308; number_of_response:1; }","duration":"141.271604ms","start":"2024-09-12T21:36:40.229040Z","end":"2024-09-12T21:36:40.370312Z","steps":["trace[283169378] 'process raft request'  (duration: 140.711611ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:36:40.373426Z","caller":"traceutil/trace.go:171","msg":"trace[1141496537] linearizableReadLoop","detail":"{readStateIndex:2408; appliedIndex:2407; }","duration":"126.378301ms","start":"2024-09-12T21:36:40.244572Z","end":"2024-09-12T21:36:40.370950Z","steps":["trace[1141496537] 'read index received'  (duration: 125.13513ms)","trace[1141496537] 'applied index is now lower than readState.Index'  (duration: 1.242282ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:36:40.373586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.969049ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:36:40.373617Z","caller":"traceutil/trace.go:171","msg":"trace[918217491] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2308; }","duration":"129.041631ms","start":"2024-09-12T21:36:40.244566Z","end":"2024-09-12T21:36:40.373607Z","steps":["trace[918217491] 'agreement among raft nodes before linearized reading'  (duration: 128.948467ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:41:01.288208Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1961}
	{"level":"info","ts":"2024-09-12T21:41:01.388569Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1961,"took":"98.543918ms","hash":2840322955,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":7294976,"current-db-size-in-use":"7.3 MB"}
	{"level":"info","ts":"2024-09-12T21:41:01.388916Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2840322955,"revision":1961,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T21:46:01.295408Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2828}
	{"level":"info","ts":"2024-09-12T21:46:01.321521Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2828,"took":"24.902944ms","hash":1892258058,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":3362816,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-12T21:46:01.321663Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1892258058,"revision":2828,"compact-revision":1961}
	{"level":"info","ts":"2024-09-12T21:51:01.303180Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3072}
	{"level":"info","ts":"2024-09-12T21:51:01.308007Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3072,"took":"3.939804ms","hash":998312940,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":1974272,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-09-12T21:51:01.308113Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":998312940,"revision":3072,"compact-revision":2828}
	{"level":"info","ts":"2024-09-12T21:56:01.310453Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3316}
	{"level":"info","ts":"2024-09-12T21:56:01.315704Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3316,"took":"4.220871ms","hash":4170874452,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":1982464,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-09-12T21:56:01.316036Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4170874452,"revision":3316,"compact-revision":3072}
	{"level":"info","ts":"2024-09-12T22:01:01.318562Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3560}
	{"level":"info","ts":"2024-09-12T22:01:01.324061Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3560,"took":"4.695841ms","hash":3449617953,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":1986560,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-09-12T22:01:01.324160Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3449617953,"revision":3560,"compact-revision":3316}
	{"level":"info","ts":"2024-09-12T22:06:01.326393Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3804}
	{"level":"info","ts":"2024-09-12T22:06:01.331095Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3804,"took":"3.905485ms","hash":2872754431,"current-db-size-bytes":11223040,"current-db-size":"11 MB","current-db-size-in-use-bytes":1978368,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-09-12T22:06:01.331255Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2872754431,"revision":3804,"compact-revision":3560}
	
	
	==> kernel <==
	 22:10:25 up 39 min,  0 users,  load average: 0.06, 0.16, 0.24
	Linux addons-715398 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [92461bd7d44efcf69671f444da454821160f230f28ae9f2897a736ce43a3c6d1] <==
	E0912 21:35:46.432352       1 conn.go:339] Error on socket receive: read tcp 192.168.39.77:8443->192.168.39.1:46078: use of closed network connection
	E0912 21:35:46.618636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.77:8443->192.168.39.1:46098: use of closed network connection
	E0912 21:36:23.827402       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.77:8443->10.244.0.34:39334: read: connection reset by peer
	I0912 21:36:23.969535       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0912 21:36:25.276032       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0912 21:36:26.663455       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:36:27.708778       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:36:32.377167       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.199.202"}
	I0912 21:36:52.981373       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:36:53.179759       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.7.213"}
	I0912 21:36:58.813596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:36:58.813653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:36:58.845505       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:36:58.845562       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:36:58.850597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:36:58.850981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:36:58.866156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:36:58.866210       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:36:58.913223       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:36:58.913285       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:36:59.851140       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:36:59.913801       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 21:37:00.021232       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0912 21:37:05.662025       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.103.75"}
	I0912 21:37:13.135128       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [60b9b9bad0c29338f7e4e905491534a3a295b417797350f89020dccbaa61ae42] <==
	W0912 22:09:35.240327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:35.240380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:09:39.185247       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:39.185445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:09:40.534162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:40.534373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:09:44.007477       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:44.007744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:09:49.992428       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:49.992491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:09:53.259455       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:09:53.259515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:10:01.153986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:01.154026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:10:04.472001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:04.472348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:10:10.101517       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:10.101574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0912 22:10:14.961390       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace gadget failed: unexpected items still remain in namespace: gadget for gvr: /v1, Resource=pods" logger="UnhandledError"
	W0912 22:10:24.542353       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:24.542411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:10:25.805699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:25.805775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:10:25.908644       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:10:25.908690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [dda4cfce3068d51f23113775e66dddf2bc6ca9281b25d8fe86328955cc93e149] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:31:11.561307       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:31:11.661947       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0912 21:31:11.662064       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:31:12.042979       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:31:12.043053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:31:12.043078       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:31:12.046218       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:31:12.046467       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:31:12.046501       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:31:12.047958       1 config.go:199] "Starting service config controller"
	I0912 21:31:12.048002       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:31:12.048035       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:31:12.048077       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:31:12.053155       1 config.go:328] "Starting node config controller"
	I0912 21:31:12.053185       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:31:12.148804       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:31:12.148819       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:31:12.153974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22df3bb70c0ee3a9f3270459871d14990043a6a37af37f3ea6269381b74e8146] <==
	E0912 21:31:02.707328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0912 21:31:02.707356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0912 21:31:02.707443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0912 21:31:02.707626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:02.687287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0912 21:31:02.687298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 21:31:02.705303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:31:02.734192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0912 21:31:02.734242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0912 21:31:02.734430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.557068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:31:03.557121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.570120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:31:03.570268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.778991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:31:03.779522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.799163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:31:03.799289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.873576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:31:03.873959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:03.947144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:31:03.947277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:31:04.122264       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:31:04.122457       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0912 21:31:06.752624       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 22:09:52 addons-715398 kubelet[1192]: E0912 22:09:52.853550    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"a4f0bdcc183e0dd5374c32efc1f8f4a29b003c00eeea61e2c0c5dd690779a98d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:09:57 addons-715398 kubelet[1192]: E0912 22:09:57.820549    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c99ac24a0bb6608b69b60187288a8753338616d163b597be992b6eb7f6f0e438\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:09:57 addons-715398 kubelet[1192]: E0912 22:09:57.834589    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c06256e5879230c95040e98190ec52d2236605569c890d26b36a2d75895df35b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:09:57 addons-715398 kubelet[1192]: E0912 22:09:57.847590    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"4aa76b18c3ecfe3f8004fad5bb5500495d857097827a45454316804a510ec786\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:02 addons-715398 kubelet[1192]: E0912 22:10:02.818959    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"77929e7b28a2af8deff29fd6103379865544078df5c1216a897c7b0acc342770\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:02 addons-715398 kubelet[1192]: E0912 22:10:02.841429    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"0326764a3d20cb05021cb090391b9432349a9b82524b193864e2c94ffe191eb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:02 addons-715398 kubelet[1192]: E0912 22:10:02.858113    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c2792b14b0f984ec3fc6b915c751df35f515c3dd9655215a26bc9655fc01ffc8\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:05 addons-715398 kubelet[1192]: E0912 22:10:05.500098    1192 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:10:05 addons-715398 kubelet[1192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:10:05 addons-715398 kubelet[1192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:10:05 addons-715398 kubelet[1192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:10:05 addons-715398 kubelet[1192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:10:07 addons-715398 kubelet[1192]: E0912 22:10:07.827554    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"034ab0fb5592c0a1d05fd0792e14ef47fddacc36ad47e1bb52729ae15ba34fbf\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:07 addons-715398 kubelet[1192]: E0912 22:10:07.840413    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c8860b19af947a3faa457208300b345678f96187199ef8f2e1245dca18d85698\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:07 addons-715398 kubelet[1192]: E0912 22:10:07.851977    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"712fa0f2fcf6c490a7a05bd46618fc75fcbc780abe5b4cda05bc5cba273aa879\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:12 addons-715398 kubelet[1192]: E0912 22:10:12.818692    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d486ccc48f1f9fa91dff933c933f5d1429887ce9a22ee8701c40bffb66e7a5b7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:12 addons-715398 kubelet[1192]: E0912 22:10:12.844225    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"7b59c173ab41e59cde45a24d615b5e447136343538b189486506b9540264dc98\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:12 addons-715398 kubelet[1192]: E0912 22:10:12.858196    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"450464a2214578f5ef10a831f2c1e33b669f28ed20cf38ed69ee4490b0443132\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:13 addons-715398 kubelet[1192]: I0912 22:10:13.466476    1192 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 22:10:17 addons-715398 kubelet[1192]: E0912 22:10:17.823171    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"7e6b0a044a92e9a2c74afad38bd6da01971527ebb95e5355e966eb01986953cb\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:17 addons-715398 kubelet[1192]: E0912 22:10:17.839409    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"0170edcf76c34caab7625cc794897f81899b1ead44e295d7c98d6bc00b91765b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:17 addons-715398 kubelet[1192]: E0912 22:10:17.852828    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"83510ba536f112a590c64c586e41d31e5c3aadfa4e5d6e3d5c29f2104bd5b7ee\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:22 addons-715398 kubelet[1192]: E0912 22:10:22.820831    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"f5a49a6e355c601b4020d7578e61c73aab998298ea698d4689b9b716e8825a01\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:22 addons-715398 kubelet[1192]: E0912 22:10:22.837292    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"73fd684dc7d5cf3f83ea3b8cc02cf6f579c6aa6aa677be4445df1b3a8e0fed1d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:10:22 addons-715398 kubelet[1192]: E0912 22:10:22.852126    1192 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"1ea34e1c6ccd31e6c75496fc34831ec8ed05edfeb5fc6b437d6906d0c8a5ec4a\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="68a2c857e1aa1541d1fbd4b07c84d65bf9d8802a5cb75b6e1710de471b850d58" cmd=["/bin/gadgettracermanager","-liveness"]
	
	
	==> storage-provisioner [4968d944826eb2c90746765f60d23c5f5c2c551495cbf85abc0c957acc6cbb93] <==
	I0912 21:31:18.379390       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:31:18.458201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:31:18.458358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:31:18.484956       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:31:18.485201       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-715398_dae9c4cf-9860-4b25-93a0-68f25670a6d0!
	I0912 21:31:18.486207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea0b56e5-8694-48cb-bd18-55a148d5125e", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-715398_dae9c4cf-9860-4b25-93a0-68f25670a6d0 became leader
	I0912 21:31:18.586506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-715398_dae9c4cf-9860-4b25-93a0-68f25670a6d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-715398 -n addons-715398
helpers_test.go:261: (dbg) Run:  kubectl --context addons-715398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (2046.55s)

                                                
                                    
TestAddons/StoppedEnableDisable (0s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-715398
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-715398: context deadline exceeded (1.894µs)
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-715398" : context deadline exceeded
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715398
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-715398: context deadline exceeded (362ns)
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-715398" : context deadline exceeded
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715398
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-715398: context deadline exceeded (302ns)
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-715398" : context deadline exceeded
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-715398
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-715398: context deadline exceeded (237ns)
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-715398" : context deadline exceeded
--- FAIL: TestAddons/StoppedEnableDisable (0.00s)

                                                
                                    

Test pass (288/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 42.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 21.99
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 84.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 266.31
29 TestAddons/serial/Volcano 42.88
31 TestAddons/serial/GCPAuth/Namespaces 0.11
33 TestAddons/parallel/Registry 17.78
34 TestAddons/parallel/Ingress 22.45
36 TestAddons/parallel/MetricsServer 6.85
37 TestAddons/parallel/HelmTiller 12.63
39 TestAddons/parallel/CSI 63.73
40 TestAddons/parallel/Headlamp 21.17
41 TestAddons/parallel/CloudSpanner 5.54
42 TestAddons/parallel/LocalPath 57.34
43 TestAddons/parallel/NvidiaDevicePlugin 5.89
44 TestAddons/parallel/Yakd 11.94
46 TestCertOptions 65.7
47 TestCertExpiration 259.85
49 TestForceSystemdFlag 84.48
50 TestForceSystemdEnv 107.45
52 TestKVMDriverInstallOrUpdate 8.69
56 TestErrorSpam/setup 44.39
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.49
60 TestErrorSpam/unpause 1.68
61 TestErrorSpam/stop 4.04
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 83.36
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 42.12
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
73 TestFunctional/serial/CacheCmd/cache/add_local 2.73
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 52.21
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.35
84 TestFunctional/serial/LogsFileCmd 1.35
85 TestFunctional/serial/InvalidService 5.5
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 20.07
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.83
95 TestFunctional/parallel/ServiceCmdConnect 10.75
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 47.23
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.35
101 TestFunctional/parallel/MySQL 27.64
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.22
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
111 TestFunctional/parallel/License 0.85
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.52
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.66
119 TestFunctional/parallel/ImageCommands/Setup 2.7
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.87
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
140 TestFunctional/parallel/ProfileCmd/profile_list 0.29
141 TestFunctional/parallel/MountCmd/any-port 19.7
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
143 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
144 TestFunctional/parallel/ServiceCmd/List 0.46
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
147 TestFunctional/parallel/ServiceCmd/Format 0.28
148 TestFunctional/parallel/ServiceCmd/URL 0.3
149 TestFunctional/parallel/MountCmd/specific-port 1.75
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.09
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 252.22
158 TestMultiControlPlane/serial/DeployApp 7.77
159 TestMultiControlPlane/serial/PingHostFromPods 1.17
160 TestMultiControlPlane/serial/AddWorkerNode 58.36
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.29
164 TestMultiControlPlane/serial/StopSecondaryNode 92.14
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
166 TestMultiControlPlane/serial/RestartSecondaryNode 40.57
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.52
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 445.83
169 TestMultiControlPlane/serial/DeleteSecondaryNode 6.73
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
171 TestMultiControlPlane/serial/StopCluster 274.59
172 TestMultiControlPlane/serial/RestartCluster 156.5
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 79.79
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 56.22
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.58
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 88.69
211 TestMountStart/serial/StartWithMountFirst 28.83
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 30.01
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 23.8
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 113.06
223 TestMultiNode/serial/DeployApp2Nodes 7.1
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 53.63
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7
229 TestMultiNode/serial/StopNode 2.13
230 TestMultiNode/serial/StartAfterStop 33.83
231 TestMultiNode/serial/RestartKeepsNodes 330.08
232 TestMultiNode/serial/DeleteNode 1.94
233 TestMultiNode/serial/StopMultiNode 183.12
234 TestMultiNode/serial/RestartMultiNode 106.14
235 TestMultiNode/serial/ValidateNameConflict 44.15
240 TestPreload 284.51
242 TestScheduledStopUnix 119.73
246 TestRunningBinaryUpgrade 200.54
248 TestKubernetesUpgrade 208.88
257 TestPause/serial/Start 84.64
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 98.04
269 TestNetworkPlugins/group/false 2.81
273 TestPause/serial/SecondStartNoReconfiguration 88.5
274 TestNoKubernetes/serial/StartWithStopK8s 54.69
275 TestStoppedBinaryUpgrade/Setup 3.56
276 TestStoppedBinaryUpgrade/Upgrade 199.08
277 TestNoKubernetes/serial/Start 48.84
278 TestPause/serial/Pause 0.73
279 TestPause/serial/VerifyStatus 0.24
280 TestPause/serial/Unpause 0.63
281 TestPause/serial/PauseAgain 0.88
282 TestPause/serial/DeletePaused 1.16
283 TestPause/serial/VerifyDeletedResources 0.27
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
285 TestNoKubernetes/serial/ProfileList 0.83
286 TestNoKubernetes/serial/Stop 1.62
287 TestNoKubernetes/serial/StartNoArgs 61.17
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
291 TestStartStop/group/old-k8s-version/serial/FirstStart 185.86
293 TestStartStop/group/embed-certs/serial/FirstStart 77.63
295 TestStartStop/group/no-preload/serial/FirstStart 127.89
296 TestStartStop/group/embed-certs/serial/DeployApp 11.34
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
298 TestStartStop/group/embed-certs/serial/Stop 91.76
299 TestStartStop/group/no-preload/serial/DeployApp 12.29
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
301 TestStartStop/group/no-preload/serial/Stop 92.17
302 TestStartStop/group/old-k8s-version/serial/DeployApp 11.41
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/embed-certs/serial/SecondStart 296.33
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
306 TestStartStop/group/old-k8s-version/serial/Stop 92.03
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.65
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
310 TestStartStop/group/no-preload/serial/SecondStart 324.23
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/old-k8s-version/serial/SecondStart 163.02
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.31
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.75
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.44
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
321 TestStartStop/group/old-k8s-version/serial/Pause 2.73
323 TestStartStop/group/newest-cni/serial/FirstStart 48.74
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
327 TestStartStop/group/embed-certs/serial/Pause 3.17
328 TestNetworkPlugins/group/auto/Start 86.75
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
331 TestStartStop/group/newest-cni/serial/Stop 91.73
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
334 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
335 TestStartStop/group/no-preload/serial/Pause 2.45
336 TestNetworkPlugins/group/auto/KubeletFlags 0.74
337 TestNetworkPlugins/group/auto/NetCatPod 10.25
338 TestNetworkPlugins/group/kindnet/Start 63.01
339 TestNetworkPlugins/group/auto/DNS 0.16
340 TestNetworkPlugins/group/auto/Localhost 0.15
341 TestNetworkPlugins/group/auto/HairPin 0.14
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/newest-cni/serial/SecondStart 48.55
344 TestNetworkPlugins/group/calico/Start 98.14
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
348 TestStartStop/group/newest-cni/serial/Pause 2.41
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/custom-flannel/Start 81.73
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
353 TestNetworkPlugins/group/kindnet/DNS 0.19
354 TestNetworkPlugins/group/kindnet/Localhost 0.13
355 TestNetworkPlugins/group/kindnet/HairPin 0.16
356 TestNetworkPlugins/group/enable-default-cni/Start 95.74
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.23
359 TestNetworkPlugins/group/calico/NetCatPod 13.28
360 TestNetworkPlugins/group/calico/DNS 0.22
361 TestNetworkPlugins/group/calico/Localhost 0.15
362 TestNetworkPlugins/group/calico/HairPin 0.14
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
366 TestNetworkPlugins/group/custom-flannel/DNS 0.18
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
370 TestNetworkPlugins/group/flannel/Start 73.56
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
373 TestNetworkPlugins/group/bridge/Start 105.68
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
381 TestNetworkPlugins/group/flannel/NetCatPod 10.21
382 TestNetworkPlugins/group/flannel/DNS 0.15
383 TestNetworkPlugins/group/flannel/Localhost 0.14
384 TestNetworkPlugins/group/flannel/HairPin 0.12
385 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
386 TestNetworkPlugins/group/bridge/NetCatPod 9.21
387 TestNetworkPlugins/group/bridge/DNS 0.15
388 TestNetworkPlugins/group/bridge/Localhost 0.11
389 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (42.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-713556 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-713556 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (42.363955863s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (42.36s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-713556
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-713556: exit status 85 (57.505263ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-713556 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |          |
	|         | -p download-only-713556        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:19.053258   13180 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:19.053531   13180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:19.053541   13180 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:19.053547   13180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:19.053766   13180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	W0912 21:29:19.053902   13180 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-5898/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-5898/.minikube/config/config.json: no such file or directory
	I0912 21:29:19.054465   13180 out.go:352] Setting JSON to true
	I0912 21:29:19.055353   13180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":700,"bootTime":1726175859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:19.055406   13180 start.go:139] virtualization: kvm guest
	I0912 21:29:19.057856   13180 out.go:97] [download-only-713556] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:29:19.057961   13180 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:29:19.057996   13180 notify.go:220] Checking for updates...
	I0912 21:29:19.059319   13180 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:29:19.060619   13180 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:19.061967   13180 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 21:29:19.063333   13180 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 21:29:19.064694   13180 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:29:19.067326   13180 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:29:19.067596   13180 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:19.167928   13180 out.go:97] Using the kvm2 driver based on user configuration
	I0912 21:29:19.167957   13180 start.go:297] selected driver: kvm2
	I0912 21:29:19.167971   13180 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:29:19.168392   13180 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:19.168541   13180 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:29:19.183357   13180 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:29:19.183401   13180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:19.183849   13180 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 21:29:19.183988   13180 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:29:19.184045   13180 cni.go:84] Creating CNI manager for ""
	I0912 21:29:19.184055   13180 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 21:29:19.184065   13180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:19.184113   13180 start.go:340] cluster config:
	{Name:download-only-713556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-713556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:19.184273   13180 iso.go:125] acquiring lock: {Name:mkb0c1e04979058aa1830bb4b8c465592b866cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:19.186289   13180 out.go:97] Downloading VM boot image ...
	I0912 21:29:19.186327   13180 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19616-5898/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:29:37.969777   13180 out.go:97] Starting "download-only-713556" primary control-plane node in "download-only-713556" cluster
	I0912 21:29:37.969807   13180 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 21:29:38.124172   13180 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0912 21:29:38.124229   13180 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:38.124408   13180 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 21:29:38.126829   13180 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 21:29:38.126849   13180 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0912 21:29:38.287299   13180 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-713556 host does not exist
	  To start a cluster, run: "minikube start -p download-only-713556"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-713556
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (21.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-530122 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-530122 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (21.985309512s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (21.99s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-530122
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-530122: exit status 85 (56.249569ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-713556 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | -p download-only-713556        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC | 12 Sep 24 21:30 UTC |
	| delete  | -p download-only-713556        | download-only-713556 | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC | 12 Sep 24 21:30 UTC |
	| start   | -o=json --download-only        | download-only-530122 | jenkins | v1.34.0 | 12 Sep 24 21:30 UTC |                     |
	|         | -p download-only-530122        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:30:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:30:01.728429   13485 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:30:01.728559   13485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:30:01.728568   13485 out.go:358] Setting ErrFile to fd 2...
	I0912 21:30:01.728572   13485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:30:01.728781   13485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 21:30:01.729405   13485 out.go:352] Setting JSON to true
	I0912 21:30:01.730312   13485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":743,"bootTime":1726175859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:30:01.730371   13485 start.go:139] virtualization: kvm guest
	I0912 21:30:01.732622   13485 out.go:97] [download-only-530122] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:30:01.732788   13485 notify.go:220] Checking for updates...
	I0912 21:30:01.734390   13485 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:30:01.735933   13485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:30:01.737420   13485 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 21:30:01.738875   13485 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 21:30:01.740277   13485 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:30:01.742812   13485 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:30:01.743032   13485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:30:01.776752   13485 out.go:97] Using the kvm2 driver based on user configuration
	I0912 21:30:01.776786   13485 start.go:297] selected driver: kvm2
	I0912 21:30:01.776800   13485 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:30:01.777152   13485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:30:01.777237   13485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5898/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:30:01.792768   13485 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:30:01.792812   13485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:30:01.793239   13485 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 21:30:01.793376   13485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:30:01.793432   13485 cni.go:84] Creating CNI manager for ""
	I0912 21:30:01.793441   13485 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 21:30:01.793452   13485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:30:01.793500   13485 start.go:340] cluster config:
	{Name:download-only-530122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-530122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:01.793589   13485 iso.go:125] acquiring lock: {Name:mkb0c1e04979058aa1830bb4b8c465592b866cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:30:01.795352   13485 out.go:97] Starting "download-only-530122" primary control-plane node in "download-only-530122" cluster
	I0912 21:30:01.795378   13485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 21:30:02.544018   13485 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0912 21:30:02.544062   13485 cache.go:56] Caching tarball of preloaded images
	I0912 21:30:02.544243   13485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 21:30:02.546130   13485 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0912 21:30:02.546166   13485 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 ...
	I0912 21:30:02.705829   13485 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:6356ceed7fe748d0ea8e34a3342d6f3c -> /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0912 21:30:15.513405   13485 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 ...
	I0912 21:30:15.513503   13485 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-5898/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 ...
	I0912 21:30:16.244887   13485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0912 21:30:16.245207   13485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/download-only-530122/config.json ...
	I0912 21:30:16.245233   13485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/download-only-530122/config.json: {Name:mkd695bcc90c543b45422ae02d7ed0191e7a2302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:16.245384   13485 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 21:30:16.245506   13485 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19616-5898/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-530122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-530122"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-530122
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-019201 --alsologtostderr --binary-mirror http://127.0.0.1:33319 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-019201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-019201
--- PASS: TestBinaryMirror (0.58s)

TestOffline (84.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-312938 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-312938 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.647382004s)
helpers_test.go:175: Cleaning up "offline-containerd-312938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-312938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-312938: (1.622770022s)
--- PASS: TestOffline (84.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715398
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-715398: exit status 85 (51.405985ms)

-- stdout --
	* Profile "addons-715398" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715398"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715398
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-715398: exit status 85 (51.871807ms)

-- stdout --
	* Profile "addons-715398" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715398"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (266.31s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-715398 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-715398 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m26.306951857s)
--- PASS: TestAddons/Setup (266.31s)

TestAddons/serial/Volcano (42.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 27.89332ms
addons_test.go:897: volcano-scheduler stabilized in 28.027453ms
addons_test.go:905: volcano-admission stabilized in 28.190697ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-jwhgd" [e2d13d0e-7150-4059-ace3-953d9291bdd8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003604739s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-mfk47" [073da2f1-246a-4e1b-9202-632ac80de08e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003917739s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hw9gr" [2ad2aa29-545e-4f28-914e-367325269bdc] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00391677s
addons_test.go:932: (dbg) Run:  kubectl --context addons-715398 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-715398 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-715398 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5afd674b-ed9b-4895-95eb-fbaffaa70ba5] Pending
helpers_test.go:344: "test-job-nginx-0" [5afd674b-ed9b-4895-95eb-fbaffaa70ba5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5afd674b-ed9b-4895-95eb-fbaffaa70ba5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004794673s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable volcano --alsologtostderr -v=1: (10.469187575s)
--- PASS: TestAddons/serial/Volcano (42.88s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-715398 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-715398 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Registry (17.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.572043ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zk84l" [49ab21ec-c966-42ee-8094-c36eab0ac340] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00332972s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ljcbv" [024b69e7-bbbc-4ada-a1a0-2a1141baeac9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004238313s
addons_test.go:342: (dbg) Run:  kubectl --context addons-715398 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-715398 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-715398 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.742549297s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 ip
2024/09/12 21:36:12 [DEBUG] GET http://192.168.39.77:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.78s)

TestAddons/parallel/Ingress (22.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-715398 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-715398 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-715398 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bf4bfb55-a6ba-48f7-b8f8-2076bea3ebad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bf4bfb55-a6ba-48f7-b8f8-2076bea3ebad] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004375149s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-715398 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.77
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable ingress-dns --alsologtostderr -v=1: (1.510208683s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable ingress --alsologtostderr -v=1: (7.72272473s)
--- PASS: TestAddons/parallel/Ingress (22.45s)

TestAddons/parallel/MetricsServer (6.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.729302ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-szb47" [23fcc711-0ca1-483e-ba96-3ac1a0ad6262] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005237534s
addons_test.go:417: (dbg) Run:  kubectl --context addons-715398 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/HelmTiller (12.63s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.627613ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-g2nw5" [79597e16-af9b-4833-84f6-c20a8280464f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004321113s
addons_test.go:475: (dbg) Run:  kubectl --context addons-715398 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-715398 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.979701802s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.63s)

TestAddons/parallel/CSI (63.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 18.128968ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-715398 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-715398 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6722a53c-46b1-43b3-8d5b-eb54cd56b462] Pending
helpers_test.go:344: "task-pv-pod" [6722a53c-46b1-43b3-8d5b-eb54cd56b462] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6722a53c-46b1-43b3-8d5b-eb54cd56b462] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003520189s
addons_test.go:590: (dbg) Run:  kubectl --context addons-715398 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-715398 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-715398 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-715398 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-715398 delete pod task-pv-pod: (1.147160299s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-715398 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-715398 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-715398 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cc53c7fb-8c73-4fa2-9eb9-ec210394f105] Pending
helpers_test.go:344: "task-pv-pod-restore" [cc53c7fb-8c73-4fa2-9eb9-ec210394f105] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cc53c7fb-8c73-4fa2-9eb9-ec210394f105] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005200509s
addons_test.go:632: (dbg) Run:  kubectl --context addons-715398 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-715398 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-715398 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781604877s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.73s)
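The manifest testdata/csi-hostpath-driver/snapshot.yaml referenced above is not included in this log. A minimal sketch of what such a VolumeSnapshot typically looks like for the csi-hostpath driver; only the names new-snapshot-demo and hpvc appear in the log, and the volumeSnapshotClassName is an assumption:

```yaml
# Hypothetical snapshot.yaml; snapshots the PVC "hpvc" created earlier in the test.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
  namespace: default
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # assumed class name
  source:
    persistentVolumeClaimName: hpvc
```

The restore step then creates a new PVC (hpvc-restore) whose dataSource points at this snapshot, which is what the repeated readyToUse/phase polls above are waiting on.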

                                                
                                    
TestAddons/parallel/Headlamp (21.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-715398 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-m54d4" [463016ed-becb-470d-96d8-94353980c287] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-m54d4" [463016ed-becb-470d-96d8-94353980c287] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-m54d4" [463016ed-becb-470d-96d8-94353980c287] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003855689s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable headlamp --alsologtostderr -v=1: (6.237905674s)
--- PASS: TestAddons/parallel/Headlamp (21.17s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-nmt5r" [b6ba34ba-bcd5-4f17-a883-feae1553bb6a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004032212s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-715398
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
TestAddons/parallel/LocalPath (57.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-715398 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-715398 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c0fa1267-5669-4a2e-80b0-1390c670dc9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c0fa1267-5669-4a2e-80b0-1390c670dc9b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c0fa1267-5669-4a2e-80b0-1390c670dc9b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003907482s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-715398 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 ssh "cat /opt/local-path-provisioner/pvc-aff7e032-36f4-43b4-b8ae-1b2682fe1dfa_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-715398 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-715398 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.537094909s)
--- PASS: TestAddons/parallel/LocalPath (57.34s)
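The testdata/storage-provisioner-rancher/pvc.yaml applied above is likewise not shown in this log. A minimal sketch of a claim the local-path provisioner would bind; only the claim name test-pvc comes from the log, while the storageClassName and size are assumptions:

```yaml
# Hypothetical pvc.yaml; "local-path" is the class name commonly used by
# rancher/local-path-provisioner, assumed here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
```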

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.89s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b777s" [7331841e-12ab-47ba-bd0a-65f840b6ec72] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.013385735s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-715398
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.89s)

                                                
                                    
TestAddons/parallel/Yakd (11.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pgp5w" [3f1c5360-66e6-40fb-bec5-02c2d9372ad1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004876225s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-715398 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-715398 addons disable yakd --alsologtostderr -v=1: (5.930606475s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
TestCertOptions (65.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-735392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-735392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m4.379139958s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-735392 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-735392 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-735392 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-735392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-735392
--- PASS: TestCertOptions (65.70s)
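The openssl check above runs inside the minikube VM against the generated apiserver certificate. The same SAN verification can be sketched offline with a throwaway certificate carrying the SANs the test passes via --apiserver-ips/--apiserver-names (file paths under /tmp are illustrative; -addext needs OpenSSL 1.1.1 or newer):

```shell
# Issue a throwaway self-signed cert with the same SANs the test requests,
# then confirm they appear in the x509 text dump.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```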

                                                
                                    
TestCertExpiration (259.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-217794 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-217794 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (46.084148929s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-217794 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-217794 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (32.535085845s)
helpers_test.go:175: Cleaning up "cert-expiration-217794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-217794
E0912 23:14:35.672604   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-217794: (1.230515592s)
--- PASS: TestCertExpiration (259.85s)
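The two starts above exercise certificate regeneration around a 3m expiry window before extending it to 8760h. The expiry check itself can be sketched offline with openssl's -checkend, which exits 0 when the certificate does not expire within the given number of seconds (throwaway files; paths are illustrative):

```shell
# Issue a cert valid for one day, then ask whether it expires
# within the next 60 seconds.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/expiry.key -out /tmp/expiry.crt -subj "/CN=minikube"
openssl x509 -checkend 60 -in /tmp/expiry.crt && echo "certificate still valid"
```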

                                                
                                    
TestForceSystemdFlag (84.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-737678 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0912 23:09:18.746500   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-737678 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m23.277271168s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-737678 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-737678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-737678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-737678: (1.00174599s)
--- PASS: TestForceSystemdFlag (84.48s)
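The cat of /etc/containerd/config.toml above is how the test verifies the cgroup driver. With --force-systemd, containerd's runc runtime should be switched to the systemd cgroup driver; a sketch of the excerpt the check presumably looks for (standard containerd CRI config keys, not taken from this log):

```toml
# /etc/containerd/config.toml (relevant excerpt)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```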

                                                
                                    
TestForceSystemdEnv (107.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-548975 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-548975 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m46.282634905s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-548975 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-548975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-548975
--- PASS: TestForceSystemdEnv (107.45s)

                                                
                                    
TestKVMDriverInstallOrUpdate (8.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.69s)

                                                
                                    
TestErrorSpam/setup (44.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-899043 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-899043 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-899043 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-899043 --driver=kvm2  --container-runtime=containerd: (44.387214318s)
--- PASS: TestErrorSpam/setup (44.39s)

                                                
                                    
TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (4.04s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop: (1.539438868s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop: (1.292785054s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-899043 --log_dir /tmp/nospam-899043 stop: (1.212184732s)
--- PASS: TestErrorSpam/stop (4.04s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-5898/.minikube/files/etc/test/nested/copy/13168/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-279627 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m23.356702304s)
--- PASS: TestFunctional/serial/StartWithProxy (83.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.12s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-279627 --alsologtostderr -v=8: (42.122657884s)
functional_test.go:663: soft start took 42.123380129s for "functional-279627" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.12s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-279627 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:3.1: (1.073006477s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:3.3: (1.109516074s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 cache add registry.k8s.io/pause:latest: (1.06505428s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-279627 /tmp/TestFunctionalserialCacheCmdcacheadd_local2405944082/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache add minikube-local-cache-test:functional-279627
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 cache add minikube-local-cache-test:functional-279627: (2.39527125s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache delete minikube-local-cache-test:functional-279627
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-279627
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.337514ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 kubectl -- --context functional-279627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-279627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (52.21s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-279627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.211556713s)
functional_test.go:761: restart took 52.211664933s for "functional-279627" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.21s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-279627 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 logs: (1.352345206s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 logs --file /tmp/TestFunctionalserialLogsFileCmd2563044585/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 logs --file /tmp/TestFunctionalserialLogsFileCmd2563044585/001/logs.txt: (1.350581784s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (5.5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-279627 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-279627
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-279627: exit status 115 (264.994667ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.55:30214 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-279627 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-279627 delete -f testdata/invalidsvc.yaml: (2.05061027s)
--- PASS: TestFunctional/serial/InvalidService (5.50s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 config get cpus: exit status 14 (53.012514ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 config get cpus: exit status 14 (46.111703ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DashboardCmd (20.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-279627 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-279627 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28261: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.07s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-279627 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (126.783452ms)
-- stdout --
	* [functional-279627] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0912 22:14:56.935449   28122 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:14:56.935576   28122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:56.935586   28122 out.go:358] Setting ErrFile to fd 2...
	I0912 22:14:56.935592   28122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:56.935763   28122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:14:56.936776   28122 out.go:352] Setting JSON to false
	I0912 22:14:56.938002   28122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3438,"bootTime":1726175859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:14:56.938068   28122 start.go:139] virtualization: kvm guest
	I0912 22:14:56.939944   28122 out.go:177] * [functional-279627] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:14:56.941239   28122 notify.go:220] Checking for updates...
	I0912 22:14:56.941245   28122 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:14:56.942979   28122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:14:56.944468   28122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 22:14:56.946089   28122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 22:14:56.947267   28122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:14:56.948454   28122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:14:56.950028   28122 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:14:56.950411   28122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:14:56.950465   28122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:14:56.964744   28122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0912 22:14:56.965099   28122 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:14:56.965592   28122 main.go:141] libmachine: Using API Version  1
	I0912 22:14:56.965612   28122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:14:56.965940   28122 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:14:56.966141   28122 main.go:141] libmachine: (functional-279627) Calling .DriverName
	I0912 22:14:56.966389   28122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:14:56.966710   28122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:14:56.966775   28122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:14:56.980687   28122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0912 22:14:56.981101   28122 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:14:56.981513   28122 main.go:141] libmachine: Using API Version  1
	I0912 22:14:56.981532   28122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:14:56.981828   28122 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:14:56.982006   28122 main.go:141] libmachine: (functional-279627) Calling .DriverName
	I0912 22:14:57.012634   28122 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:14:57.013826   28122 start.go:297] selected driver: kvm2
	I0912 22:14:57.013842   28122 start.go:901] validating driver "kvm2" against &{Name:functional-279627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-279627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:14:57.013946   28122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:14:57.015981   28122 out.go:201] 
	W0912 22:14:57.017204   28122 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 22:14:57.018446   28122 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.25s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-279627 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-279627 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (126.353161ms)
-- stdout --
	* [functional-279627] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0912 22:14:57.182695   28177 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:14:57.182791   28177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:57.182800   28177 out.go:358] Setting ErrFile to fd 2...
	I0912 22:14:57.182804   28177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:57.183039   28177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:14:57.183535   28177 out.go:352] Setting JSON to false
	I0912 22:14:57.184412   28177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3438,"bootTime":1726175859,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:14:57.184492   28177 start.go:139] virtualization: kvm guest
	I0912 22:14:57.186708   28177 out.go:177] * [functional-279627] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0912 22:14:57.188116   28177 notify.go:220] Checking for updates...
	I0912 22:14:57.188131   28177 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:14:57.189292   28177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:14:57.190460   28177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 22:14:57.191922   28177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 22:14:57.193167   28177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:14:57.194334   28177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:14:57.196062   28177 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:14:57.196689   28177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:14:57.196744   28177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:14:57.211206   28177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0912 22:14:57.211613   28177 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:14:57.212278   28177 main.go:141] libmachine: Using API Version  1
	I0912 22:14:57.212315   28177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:14:57.212702   28177 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:14:57.212867   28177 main.go:141] libmachine: (functional-279627) Calling .DriverName
	I0912 22:14:57.213082   28177 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:14:57.213384   28177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:14:57.213421   28177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:14:57.228483   28177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 22:14:57.228840   28177 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:14:57.229260   28177 main.go:141] libmachine: Using API Version  1
	I0912 22:14:57.229279   28177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:14:57.229619   28177 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:14:57.229840   28177 main.go:141] libmachine: (functional-279627) Calling .DriverName
	I0912 22:14:57.259774   28177 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0912 22:14:57.260853   28177 start.go:297] selected driver: kvm2
	I0912 22:14:57.260872   28177 start.go:901] validating driver "kvm2" against &{Name:functional-279627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-279627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:14:57.260979   28177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:14:57.263133   28177 out.go:201] 
	W0912 22:14:57.264351   28177 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 22:14:57.265485   28177 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (10.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-279627 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-279627 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tkzhx" [304c61de-2345-49c5-9091-5edc8bceaaf6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tkzhx" [304c61de-2345-49c5-9091-5edc8bceaaf6] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.309917146s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.55:30395
functional_test.go:1675: http://192.168.39.55:30395: success! body:
Hostname: hello-node-connect-67bdd5bbb4-tkzhx
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.55:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.55:30395
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.75s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (47.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4184c781-00fd-4093-99f6-3629a16f13fb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003659459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-279627 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-279627 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-279627 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-279627 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-279627 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7a832487-db67-4618-a14c-3e2e9eaac81d] Pending
helpers_test.go:344: "sp-pod" [7a832487-db67-4618-a14c-3e2e9eaac81d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7a832487-db67-4618-a14c-3e2e9eaac81d] Running
E0912 22:15:01.413411   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004321239s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-279627 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-279627 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-279627 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2cf38013-d2d2-4bca-8c43-c5697a7c79f6] Pending
helpers_test.go:344: "sp-pod" [2cf38013-d2d2-4bca-8c43-c5697a7c79f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2cf38013-d2d2-4bca-8c43-c5697a7c79f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00464922s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-279627 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.23s)

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh -n functional-279627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cp functional-279627:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1809638346/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh -n functional-279627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh -n functional-279627 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

TestFunctional/parallel/MySQL (27.64s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-279627 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-2ggwv" [29b3a639-58b7-4d13-8638-e96037cd3d26] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-2ggwv" [29b3a639-58b7-4d13-8638-e96037cd3d26] Running
E0912 22:14:53.729437   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.005120384s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;": exit status 1 (304.992306ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;": exit status 1 (420.759291ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;": exit status 1 (181.756174ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;": exit status 1 (126.38983ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-279627 exec mysql-6cdb49bbb-2ggwv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.64s)

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13168/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /etc/test/nested/copy/13168/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13168.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /etc/ssl/certs/13168.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13168.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /usr/share/ca-certificates/13168.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/131682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /etc/ssl/certs/131682.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/131682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /usr/share/ca-certificates/131682.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-279627 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "sudo systemctl is-active docker": exit status 1 (224.76398ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "sudo systemctl is-active crio": exit status 1 (203.189858ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.85s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.85s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-279627 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-279627
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-279627
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-279627 image ls --format short --alsologtostderr:
I0912 22:15:05.963450   28459 out.go:345] Setting OutFile to fd 1 ...
I0912 22:15:05.963688   28459 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:05.963696   28459 out.go:358] Setting ErrFile to fd 2...
I0912 22:15:05.963700   28459 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:05.963858   28459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
I0912 22:15:05.964350   28459 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:05.964445   28459 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:05.964825   28459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:05.964873   28459 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:05.979912   28459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
I0912 22:15:05.980357   28459 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:05.980992   28459 main.go:141] libmachine: Using API Version  1
I0912 22:15:05.981020   28459 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:05.981408   28459 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:05.981611   28459 main.go:141] libmachine: (functional-279627) Calling .GetState
I0912 22:15:05.983616   28459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:05.983651   28459 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:05.998001   28459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
I0912 22:15:05.998412   28459 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:05.998868   28459 main.go:141] libmachine: Using API Version  1
I0912 22:15:05.998893   28459 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:05.999174   28459 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:05.999342   28459 main.go:141] libmachine: (functional-279627) Calling .DriverName
I0912 22:15:05.999524   28459 ssh_runner.go:195] Run: systemctl --version
I0912 22:15:05.999544   28459 main.go:141] libmachine: (functional-279627) Calling .GetSSHHostname
I0912 22:15:06.002203   28459 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.002585   28459 main.go:141] libmachine: (functional-279627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:55:c5", ip: ""} in network mk-functional-279627: {Iface:virbr1 ExpiryTime:2024-09-12 23:11:35 +0000 UTC Type:0 Mac:52:54:00:d3:55:c5 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-279627 Clientid:01:52:54:00:d3:55:c5}
I0912 22:15:06.002621   28459 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined IP address 192.168.39.55 and MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.002839   28459 main.go:141] libmachine: (functional-279627) Calling .GetSSHPort
I0912 22:15:06.002998   28459 main.go:141] libmachine: (functional-279627) Calling .GetSSHKeyPath
I0912 22:15:06.003134   28459 main.go:141] libmachine: (functional-279627) Calling .GetSSHUsername
I0912 22:15:06.003294   28459 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/functional-279627/id_rsa Username:docker}
I0912 22:15:06.100446   28459 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 22:15:06.164720   28459 main.go:141] libmachine: Making call to close driver server
I0912 22:15:06.164736   28459 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:06.165015   28459 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:06.165040   28459 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:06.165064   28459 main.go:141] libmachine: Making call to close driver server
I0912 22:15:06.165075   28459 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:06.165267   28459 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:06.165281   28459 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-279627 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:9aa1fa | 20.2MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:129686 | 36.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:2e96e5 | 56.9MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:175ffd | 26.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/minikube-local-cache-test | functional-279627  | sha256:413fec | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest             | sha256:39286a | 71MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:6bab77 | 28MB   |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/kicbase/echo-server               | functional-279627  | sha256:9056ab | 2.37MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:60c005 | 30.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-279627 image ls --format table --alsologtostderr:
I0912 22:15:10.148695   29031 out.go:345] Setting OutFile to fd 1 ...
I0912 22:15:10.148961   29031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:10.148972   29031 out.go:358] Setting ErrFile to fd 2...
I0912 22:15:10.148979   29031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:10.149174   29031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
I0912 22:15:10.149749   29031 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:10.149866   29031 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:10.150276   29031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:10.150326   29031 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:10.164498   29031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
I0912 22:15:10.164900   29031 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:10.165395   29031 main.go:141] libmachine: Using API Version  1
I0912 22:15:10.165418   29031 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:10.165772   29031 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:10.165947   29031 main.go:141] libmachine: (functional-279627) Calling .GetState
I0912 22:15:10.167599   29031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:10.167639   29031 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:10.181433   29031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
I0912 22:15:10.181770   29031 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:10.182159   29031 main.go:141] libmachine: Using API Version  1
I0912 22:15:10.182176   29031 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:10.182492   29031 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:10.182667   29031 main.go:141] libmachine: (functional-279627) Calling .DriverName
I0912 22:15:10.182851   29031 ssh_runner.go:195] Run: systemctl --version
I0912 22:15:10.182879   29031 main.go:141] libmachine: (functional-279627) Calling .GetSSHHostname
I0912 22:15:10.185211   29031 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:10.185606   29031 main.go:141] libmachine: (functional-279627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:55:c5", ip: ""} in network mk-functional-279627: {Iface:virbr1 ExpiryTime:2024-09-12 23:11:35 +0000 UTC Type:0 Mac:52:54:00:d3:55:c5 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-279627 Clientid:01:52:54:00:d3:55:c5}
I0912 22:15:10.185640   29031 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined IP address 192.168.39.55 and MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:10.185777   29031 main.go:141] libmachine: (functional-279627) Calling .GetSSHPort
I0912 22:15:10.185921   29031 main.go:141] libmachine: (functional-279627) Calling .GetSSHKeyPath
I0912 22:15:10.186050   29031 main.go:141] libmachine: (functional-279627) Calling .GetSSHUsername
I0912 22:15:10.186151   29031 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/functional-279627/id_rsa Username:docker}
I0912 22:15:10.272157   29031 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 22:15:10.307135   29031 main.go:141] libmachine: Making call to close driver server
I0912 22:15:10.307149   29031 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:10.307389   29031 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:10.307408   29031 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:10.307415   29031 main.go:141] libmachine: Making call to close driver server
I0912 22:15:10.307433   29031 main.go:141] libmachine: (functional-279627) DBG | Closing plugin on server side
I0912 22:15:10.307462   29031 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:10.307699   29031 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:10.307713   29031 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:10.307722   29031 main.go:141] libmachine: (functional-279627) DBG | Closing plugin on server side
E0912 22:15:11.655252   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-279627 image ls --format json --alsologtostderr:
[{"id":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"56909194"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-279627"],"size":"2372971"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"71027698"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"30211884"},{"id":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"20177215"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"36793393"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"28047142"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:413fec0d46d0567be6c33c0302e539e7b5bf9b3f2a6cb0c4be8c0dc403323109","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-279627"],"size":"991"},{"id":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"26221554"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
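Each entry in the listing above is a record with `id`, `repoDigests`, `repoTags`, and `size` (a byte count encoded as a string). A minimal sketch of consuming that record shape, using a trimmed two-record sample in place of the full output:

```python
import json

# Trimmed sample with the same record shape as the `image ls --format json` stdout above.
raw = """[
  {"id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
   "repoDigests": ["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],
   "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "56909194"},
  {"id": "sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.3"], "size": "297686"}
]"""

images = json.loads(raw)
# Flatten all tags and total the byte counts (sizes are JSON strings, not numbers).
tags = [tag for img in images for tag in img["repoTags"]]
total_bytes = sum(int(img["size"]) for img in images)
print(tags)         # ['registry.k8s.io/etcd:3.5.15-0', 'registry.k8s.io/pause:3.3']
print(total_bytes)  # 57206880
```

Note that an image with no tags (like the metrics-scraper entry above) contributes an empty `repoTags` list, so the flattening simply skips it.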
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-279627 image ls --format json --alsologtostderr:
I0912 22:15:09.941306   29007 out.go:345] Setting OutFile to fd 1 ...
I0912 22:15:09.941410   29007 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:09.941418   29007 out.go:358] Setting ErrFile to fd 2...
I0912 22:15:09.941423   29007 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:09.941589   29007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
I0912 22:15:09.942148   29007 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:09.942239   29007 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:09.942577   29007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:09.942622   29007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:09.957002   29007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33999
I0912 22:15:09.957457   29007 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:09.957979   29007 main.go:141] libmachine: Using API Version  1
I0912 22:15:09.958003   29007 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:09.958308   29007 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:09.958476   29007 main.go:141] libmachine: (functional-279627) Calling .GetState
I0912 22:15:09.960097   29007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:09.960134   29007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:09.973937   29007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
I0912 22:15:09.974274   29007 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:09.974688   29007 main.go:141] libmachine: Using API Version  1
I0912 22:15:09.974727   29007 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:09.975032   29007 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:09.975182   29007 main.go:141] libmachine: (functional-279627) Calling .DriverName
I0912 22:15:09.975353   29007 ssh_runner.go:195] Run: systemctl --version
I0912 22:15:09.975384   29007 main.go:141] libmachine: (functional-279627) Calling .GetSSHHostname
I0912 22:15:09.977901   29007 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:09.978246   29007 main.go:141] libmachine: (functional-279627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:55:c5", ip: ""} in network mk-functional-279627: {Iface:virbr1 ExpiryTime:2024-09-12 23:11:35 +0000 UTC Type:0 Mac:52:54:00:d3:55:c5 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-279627 Clientid:01:52:54:00:d3:55:c5}
I0912 22:15:09.978282   29007 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined IP address 192.168.39.55 and MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:09.978371   29007 main.go:141] libmachine: (functional-279627) Calling .GetSSHPort
I0912 22:15:09.978530   29007 main.go:141] libmachine: (functional-279627) Calling .GetSSHKeyPath
I0912 22:15:09.978656   29007 main.go:141] libmachine: (functional-279627) Calling .GetSSHUsername
I0912 22:15:09.978769   29007 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/functional-279627/id_rsa Username:docker}
I0912 22:15:10.065384   29007 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 22:15:10.103997   29007 main.go:141] libmachine: Making call to close driver server
I0912 22:15:10.104012   29007 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:10.104335   29007 main.go:141] libmachine: (functional-279627) DBG | Closing plugin on server side
I0912 22:15:10.104341   29007 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:10.104383   29007 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:10.104397   29007 main.go:141] libmachine: Making call to close driver server
I0912 22:15:10.104408   29007 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:10.104643   29007 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:10.104659   29007 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-279627 image ls --format yaml --alsologtostderr:
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:413fec0d46d0567be6c33c0302e539e7b5bf9b3f2a6cb0c4be8c0dc403323109
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-279627
size: "991"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "26221554"
- id: sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "30211884"
- id: sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "20177215"
- id: sha256:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "71027698"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "36793393"
- id: sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "56909194"
- id: sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "28047142"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-279627
size: "2372971"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-279627 image ls --format yaml --alsologtostderr:
I0912 22:15:06.211152   28512 out.go:345] Setting OutFile to fd 1 ...
I0912 22:15:06.211279   28512 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:06.211291   28512 out.go:358] Setting ErrFile to fd 2...
I0912 22:15:06.211299   28512 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:06.211498   28512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
I0912 22:15:06.212046   28512 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:06.212183   28512 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:06.212570   28512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:06.212618   28512 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:06.231970   28512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
I0912 22:15:06.232525   28512 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:06.233072   28512 main.go:141] libmachine: Using API Version  1
I0912 22:15:06.233091   28512 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:06.233502   28512 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:06.233695   28512 main.go:141] libmachine: (functional-279627) Calling .GetState
I0912 22:15:06.235858   28512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:06.235917   28512 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:06.250326   28512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
I0912 22:15:06.250771   28512 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:06.251220   28512 main.go:141] libmachine: Using API Version  1
I0912 22:15:06.251236   28512 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:06.251681   28512 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:06.251854   28512 main.go:141] libmachine: (functional-279627) Calling .DriverName
I0912 22:15:06.252042   28512 ssh_runner.go:195] Run: systemctl --version
I0912 22:15:06.252077   28512 main.go:141] libmachine: (functional-279627) Calling .GetSSHHostname
I0912 22:15:06.254368   28512 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.254712   28512 main.go:141] libmachine: (functional-279627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:55:c5", ip: ""} in network mk-functional-279627: {Iface:virbr1 ExpiryTime:2024-09-12 23:11:35 +0000 UTC Type:0 Mac:52:54:00:d3:55:c5 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-279627 Clientid:01:52:54:00:d3:55:c5}
I0912 22:15:06.254744   28512 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined IP address 192.168.39.55 and MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.254895   28512 main.go:141] libmachine: (functional-279627) Calling .GetSSHPort
I0912 22:15:06.255092   28512 main.go:141] libmachine: (functional-279627) Calling .GetSSHKeyPath
I0912 22:15:06.255324   28512 main.go:141] libmachine: (functional-279627) Calling .GetSSHUsername
I0912 22:15:06.255507   28512 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/functional-279627/id_rsa Username:docker}
I0912 22:15:06.350419   28512 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 22:15:06.391911   28512 main.go:141] libmachine: Making call to close driver server
I0912 22:15:06.391924   28512 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:06.392153   28512 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:06.392177   28512 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:06.392187   28512 main.go:141] libmachine: Making call to close driver server
I0912 22:15:06.392195   28512 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:06.392403   28512 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:06.392428   28512 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh pgrep buildkitd: exit status 1 (192.413422ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image build -t localhost/my-image:functional-279627 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 image build -t localhost/my-image:functional-279627 testdata/build --alsologtostderr: (5.227512536s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-279627 image build -t localhost/my-image:functional-279627 testdata/build --alsologtostderr:
I0912 22:15:06.637117   28623 out.go:345] Setting OutFile to fd 1 ...
I0912 22:15:06.637306   28623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:06.637315   28623 out.go:358] Setting ErrFile to fd 2...
I0912 22:15:06.637320   28623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:15:06.637506   28623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
I0912 22:15:06.638121   28623 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:06.638838   28623 config.go:182] Loaded profile config "functional-279627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:15:06.639426   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:06.639478   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:06.655347   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
I0912 22:15:06.655765   28623 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:06.656310   28623 main.go:141] libmachine: Using API Version  1
I0912 22:15:06.656323   28623 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:06.656695   28623 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:06.656876   28623 main.go:141] libmachine: (functional-279627) Calling .GetState
I0912 22:15:06.658695   28623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 22:15:06.658721   28623 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 22:15:06.674382   28623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
I0912 22:15:06.674695   28623 main.go:141] libmachine: () Calling .GetVersion
I0912 22:15:06.675127   28623 main.go:141] libmachine: Using API Version  1
I0912 22:15:06.675140   28623 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 22:15:06.675440   28623 main.go:141] libmachine: () Calling .GetMachineName
I0912 22:15:06.675585   28623 main.go:141] libmachine: (functional-279627) Calling .DriverName
I0912 22:15:06.675769   28623 ssh_runner.go:195] Run: systemctl --version
I0912 22:15:06.675800   28623 main.go:141] libmachine: (functional-279627) Calling .GetSSHHostname
I0912 22:15:06.678567   28623 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.678965   28623 main.go:141] libmachine: (functional-279627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:55:c5", ip: ""} in network mk-functional-279627: {Iface:virbr1 ExpiryTime:2024-09-12 23:11:35 +0000 UTC Type:0 Mac:52:54:00:d3:55:c5 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:functional-279627 Clientid:01:52:54:00:d3:55:c5}
I0912 22:15:06.679039   28623 main.go:141] libmachine: (functional-279627) DBG | domain functional-279627 has defined IP address 192.168.39.55 and MAC address 52:54:00:d3:55:c5 in network mk-functional-279627
I0912 22:15:06.679218   28623 main.go:141] libmachine: (functional-279627) Calling .GetSSHPort
I0912 22:15:06.679420   28623 main.go:141] libmachine: (functional-279627) Calling .GetSSHKeyPath
I0912 22:15:06.679564   28623 main.go:141] libmachine: (functional-279627) Calling .GetSSHUsername
I0912 22:15:06.679672   28623 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/functional-279627/id_rsa Username:docker}
I0912 22:15:06.784506   28623 build_images.go:161] Building image from path: /tmp/build.845557854.tar
I0912 22:15:06.784580   28623 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 22:15:06.804235   28623 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.845557854.tar
I0912 22:15:06.810483   28623 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.845557854.tar: stat -c "%s %y" /var/lib/minikube/build/build.845557854.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.845557854.tar': No such file or directory
I0912 22:15:06.810511   28623 ssh_runner.go:362] scp /tmp/build.845557854.tar --> /var/lib/minikube/build/build.845557854.tar (3072 bytes)
I0912 22:15:06.835452   28623 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.845557854
I0912 22:15:06.846507   28623 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.845557854 -xf /var/lib/minikube/build/build.845557854.tar
I0912 22:15:06.857423   28623 containerd.go:394] Building image: /var/lib/minikube/build/build.845557854
I0912 22:15:06.857493   28623 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.845557854 --local dockerfile=/var/lib/minikube/build/build.845557854 --output type=image,name=localhost/my-image:functional-279627
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:3f2797a0c172e49996ab1d3a9ed03339d9a5020cb9decca252ead83347bea975
#8 exporting manifest sha256:3f2797a0c172e49996ab1d3a9ed03339d9a5020cb9decca252ead83347bea975 0.0s done
#8 exporting config sha256:5f4361c18e00ce4267ea6ff7416c759df63b6d881d9d0a1a27ed2562017348fd 0.0s done
#8 naming to localhost/my-image:functional-279627 done
#8 DONE 0.2s
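The `#N … DONE Xs` lines above are BuildKit's plain-text progress output. As a small illustrative sketch (the parsing rule is an assumption inferred only from the lines shown, not from BuildKit's documented format), the reported per-step times can be tallied like this:

```python
import re

# A subset of the progress lines in the shape BuildKit printed above.
log = """\
#1 DONE 0.0s
#2 DONE 2.6s
#3 DONE 0.0s
#4 DONE 0.0s
#5 DONE 0.1s
#5 DONE 1.3s
#6 DONE 0.4s
#7 DONE 0.1s
#8 DONE 0.2s
"""

# "#<step> DONE <seconds>s" -> accumulate seconds per step id
# (step #5 reports DONE twice above: once for the resolve, once for the pull).
durations: dict[str, float] = {}
for m in re.finditer(r"^#(\d+) DONE ([\d.]+)s$", log, re.M):
    step, secs = m.group(1), float(m.group(2))
    durations[step] = durations.get(step, 0.0) + secs

print(round(sum(durations.values()), 1))  # total reported step time: 4.7
```

The totals are lower than the 4.91s the test logs for the whole `buildctl` invocation, since SSH and connection overhead are not part of the step timings.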
I0912 22:15:11.777300   28623 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.845557854 --local dockerfile=/var/lib/minikube/build/build.845557854 --output type=image,name=localhost/my-image:functional-279627: (4.91977994s)
I0912 22:15:11.777395   28623 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.845557854
I0912 22:15:11.794143   28623 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.845557854.tar
I0912 22:15:11.812766   28623 build_images.go:217] Built localhost/my-image:functional-279627 from /tmp/build.845557854.tar
I0912 22:15:11.812805   28623 build_images.go:133] succeeded building to: functional-279627
I0912 22:15:11.812810   28623 build_images.go:134] failed building to: 
I0912 22:15:11.812865   28623 main.go:141] libmachine: Making call to close driver server
I0912 22:15:11.812880   28623 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:11.813085   28623 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:11.813112   28623 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:11.813123   28623 main.go:141] libmachine: Making call to close driver server
I0912 22:15:11.813131   28623 main.go:141] libmachine: (functional-279627) Calling .Close
I0912 22:15:11.813328   28623 main.go:141] libmachine: Successfully made call to close driver server
I0912 22:15:11.813343   28623 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 22:15:11.813352   28623 main.go:141] libmachine: (functional-279627) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
2024/09/12 22:15:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.66s)

TestFunctional/parallel/ImageCommands/Setup (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.680690233s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-279627
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image load --daemon kicbase/echo-server:functional-279627 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 image load --daemon kicbase/echo-server:functional-279627 --alsologtostderr: (1.071736627s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image load --daemon kicbase/echo-server:functional-279627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.187731101s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-279627
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image load --daemon kicbase/echo-server:functional-279627 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-279627 image load --daemon kicbase/echo-server:functional-279627 --alsologtostderr: (1.432236687s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.87s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image save kicbase/echo-server:functional-279627 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image rm kicbase/echo-server:functional-279627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-279627
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 image save --daemon kicbase/echo-server:functional-279627 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-279627
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "244.176253ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.731697ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/MountCmd/any-port (19.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdany-port731409132/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726179287365393503" to /tmp/TestFunctionalparallelMountCmdany-port731409132/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726179287365393503" to /tmp/TestFunctionalparallelMountCmdany-port731409132/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726179287365393503" to /tmp/TestFunctionalparallelMountCmdany-port731409132/001/test-1726179287365393503
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.888591ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 22:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 22:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 22:14 test-1726179287365393503
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh cat /mount-9p/test-1726179287365393503
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-279627 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a728a336-169b-419b-b142-4fc398974c7a] Pending
helpers_test.go:344: "busybox-mount" [a728a336-169b-419b-b142-4fc398974c7a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0912 22:14:51.159311   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.166300   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.177712   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.199044   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.240447   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.321908   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.483608   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:51.805628   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:14:52.447902   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [a728a336-169b-419b-b142-4fc398974c7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a728a336-169b-419b-b142-4fc398974c7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004079832s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-279627 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdany-port731409132/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.70s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "217.087181ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "41.81799ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-279627 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-279627 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-lbgdh" [6d3e6e85-d613-4ab6-a71c-be84acc71ddb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-lbgdh" [6d3e6e85-d613-4ab6-a71c-be84acc71ddb] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.107318485s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service list -o json
functional_test.go:1494: Took "455.634643ms" to run "out/minikube-linux-amd64 -p functional-279627 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service --namespace=default --https --url hello-node
E0912 22:14:56.291490   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1522: found endpoint: https://192.168.39.55:32144
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.55:32144
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdspecific-port1691185691/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.611871ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdspecific-port1691185691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "sudo umount -f /mount-9p": exit status 1 (184.955674ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-279627 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdspecific-port1691185691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T" /mount1: exit status 1 (201.026275ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-279627 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-279627 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-279627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1521939910/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-279627
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-279627
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-279627
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (252.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-233146 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0912 22:15:32.137313   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:13.099442   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:17:35.022650   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-233146 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m11.566725246s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
E0912 22:19:35.673534   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.679919   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.691285   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.712536   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.754032   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.836508   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:35.997763   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:36.319016   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (252.22s)

TestMultiControlPlane/serial/DeployApp (7.77s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- rollout status deployment/busybox
E0912 22:19:36.961238   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:38.243285   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:40.805816   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-233146 -- rollout status deployment/busybox: (5.706569831s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-5pv7r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-dt9q6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-xzgsn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-5pv7r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-dt9q6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-xzgsn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-5pv7r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-dt9q6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-xzgsn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.77s)
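The DeployApp checks above read pod names and IPs with kubectl jsonpath queries (`{.items[*].status.podIP}`), which emit a single space-separated list. A minimal sketch of consuming that output; the IPs below are hypothetical stand-ins for what a live cluster would return:

```shell
# Hypothetical jsonpath output; a live cluster would produce it with:
#   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
ips="10.244.0.5 10.244.1.3 10.244.2.4"

# Word-splitting on the unquoted variable iterates the space-separated list.
count=0
for ip in $ips; do
  count=$((count + 1))
done
echo "found $count pod IPs"
```

The test then execs into each pod by name the same way, looping over the `{.items[*].metadata.name}` list.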

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-5pv7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-5pv7r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-dt9q6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-dt9q6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-xzgsn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-233146 -- exec busybox-7dff88458-xzgsn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
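The host-IP extraction above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) depends on the line layout of busybox's nslookup output. A sketch of that pipeline against canned output; the addresses are illustrative, not taken from this run:

```shell
# Canned output in the shape busybox nslookup prints; line 5 carries the answer.
out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 selects the answer line; field 3 (space-delimited) is the IP itself.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "host IP: $ip"
```

The extracted IP is then the target of the `ping -c 1` check from inside each pod.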

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.36s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-233146 -v=7 --alsologtostderr
E0912 22:19:45.928095   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:51.158974   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:19:56.170008   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:20:16.652314   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:20:18.864985   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-233146 -v=7 --alsologtostderr: (57.54432929s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-233146 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.29s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp testdata/cp-test.txt ha-233146:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile133727080/001/cp-test_ha-233146.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146:/home/docker/cp-test.txt ha-233146-m02:/home/docker/cp-test_ha-233146_ha-233146-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test_ha-233146_ha-233146-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146:/home/docker/cp-test.txt ha-233146-m03:/home/docker/cp-test_ha-233146_ha-233146-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test_ha-233146_ha-233146-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146:/home/docker/cp-test.txt ha-233146-m04:/home/docker/cp-test_ha-233146_ha-233146-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test_ha-233146_ha-233146-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp testdata/cp-test.txt ha-233146-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile133727080/001/cp-test_ha-233146-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m02:/home/docker/cp-test.txt ha-233146:/home/docker/cp-test_ha-233146-m02_ha-233146.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test_ha-233146-m02_ha-233146.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m02:/home/docker/cp-test.txt ha-233146-m03:/home/docker/cp-test_ha-233146-m02_ha-233146-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test_ha-233146-m02_ha-233146-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m02:/home/docker/cp-test.txt ha-233146-m04:/home/docker/cp-test_ha-233146-m02_ha-233146-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test_ha-233146-m02_ha-233146-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp testdata/cp-test.txt ha-233146-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile133727080/001/cp-test_ha-233146-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m03:/home/docker/cp-test.txt ha-233146:/home/docker/cp-test_ha-233146-m03_ha-233146.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test_ha-233146-m03_ha-233146.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m03:/home/docker/cp-test.txt ha-233146-m02:/home/docker/cp-test_ha-233146-m03_ha-233146-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test_ha-233146-m03_ha-233146-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m03:/home/docker/cp-test.txt ha-233146-m04:/home/docker/cp-test_ha-233146-m03_ha-233146-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test_ha-233146-m03_ha-233146-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp testdata/cp-test.txt ha-233146-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile133727080/001/cp-test_ha-233146-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m04:/home/docker/cp-test.txt ha-233146:/home/docker/cp-test_ha-233146-m04_ha-233146.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146 "sudo cat /home/docker/cp-test_ha-233146-m04_ha-233146.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m04:/home/docker/cp-test.txt ha-233146-m02:/home/docker/cp-test_ha-233146-m04_ha-233146-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m02 "sudo cat /home/docker/cp-test_ha-233146-m04_ha-233146-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 cp ha-233146-m04:/home/docker/cp-test.txt ha-233146-m03:/home/docker/cp-test_ha-233146-m04_ha-233146-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 ssh -n ha-233146-m03 "sudo cat /home/docker/cp-test_ha-233146-m04_ha-233146-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.29s)
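Each CopyFile step above pairs a `minikube cp` with an `ssh … sudo cat` read-back to verify the transfer. The same round-trip pattern, sketched with local temp files standing in for cluster nodes:

```shell
# Local stand-ins for a source path and a destination node path.
src=$(mktemp) && dst=$(mktemp)
printf 'cp-test payload\n' > "$src"

cp "$src" "$dst"              # stands in for: minikube -p <profile> cp <src> <node>:<dst>
if diff -q "$src" "$dst" >/dev/null; then
  verified=yes                # stands in for reading the file back via ssh + cat
else
  verified=no
fi
echo "copy verified: $verified"
rm -f "$src" "$dst"
```

The test repeats this for every ordered pair of nodes (ha-233146, -m02, -m03, -m04), which is why the section runs to dozens of nearly identical cp/ssh lines.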

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (92.14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 node stop m02 -v=7 --alsologtostderr
E0912 22:20:57.613841   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:22:19.535222   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-233146 node stop m02 -v=7 --alsologtostderr: (1m31.51208264s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr: exit status 7 (623.436369ms)

-- stdout --
	ha-233146
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-233146-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-233146-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-233146-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0912 22:22:28.053377   34238 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:22:28.053662   34238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:22:28.053746   34238 out.go:358] Setting ErrFile to fd 2...
	I0912 22:22:28.053759   34238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:22:28.054005   34238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:22:28.054176   34238 out.go:352] Setting JSON to false
	I0912 22:22:28.054204   34238 mustload.go:65] Loading cluster: ha-233146
	I0912 22:22:28.054249   34238 notify.go:220] Checking for updates...
	I0912 22:22:28.054533   34238 config.go:182] Loaded profile config "ha-233146": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:22:28.054546   34238 status.go:255] checking status of ha-233146 ...
	I0912 22:22:28.054900   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.054951   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.073104   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46361
	I0912 22:22:28.073591   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.074337   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.074362   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.074795   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.075015   34238 main.go:141] libmachine: (ha-233146) Calling .GetState
	I0912 22:22:28.076648   34238 status.go:330] ha-233146 host status = "Running" (err=<nil>)
	I0912 22:22:28.076667   34238 host.go:66] Checking if "ha-233146" exists ...
	I0912 22:22:28.076989   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.077073   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.092373   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0912 22:22:28.092695   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.093078   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.093095   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.093369   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.093534   34238 main.go:141] libmachine: (ha-233146) Calling .GetIP
	I0912 22:22:28.096326   34238 main.go:141] libmachine: (ha-233146) DBG | domain ha-233146 has defined MAC address 52:54:00:f8:ce:d6 in network mk-ha-233146
	I0912 22:22:28.096687   34238 main.go:141] libmachine: (ha-233146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ce:d6", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:15:38 +0000 UTC Type:0 Mac:52:54:00:f8:ce:d6 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-233146 Clientid:01:52:54:00:f8:ce:d6}
	I0912 22:22:28.096718   34238 main.go:141] libmachine: (ha-233146) DBG | domain ha-233146 has defined IP address 192.168.39.244 and MAC address 52:54:00:f8:ce:d6 in network mk-ha-233146
	I0912 22:22:28.096855   34238 host.go:66] Checking if "ha-233146" exists ...
	I0912 22:22:28.097127   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.097159   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.110737   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0912 22:22:28.111183   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.111601   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.111623   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.111910   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.112073   34238 main.go:141] libmachine: (ha-233146) Calling .DriverName
	I0912 22:22:28.112223   34238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:22:28.112241   34238 main.go:141] libmachine: (ha-233146) Calling .GetSSHHostname
	I0912 22:22:28.114686   34238 main.go:141] libmachine: (ha-233146) DBG | domain ha-233146 has defined MAC address 52:54:00:f8:ce:d6 in network mk-ha-233146
	I0912 22:22:28.115142   34238 main.go:141] libmachine: (ha-233146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ce:d6", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:15:38 +0000 UTC Type:0 Mac:52:54:00:f8:ce:d6 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-233146 Clientid:01:52:54:00:f8:ce:d6}
	I0912 22:22:28.115172   34238 main.go:141] libmachine: (ha-233146) DBG | domain ha-233146 has defined IP address 192.168.39.244 and MAC address 52:54:00:f8:ce:d6 in network mk-ha-233146
	I0912 22:22:28.115303   34238 main.go:141] libmachine: (ha-233146) Calling .GetSSHPort
	I0912 22:22:28.115453   34238 main.go:141] libmachine: (ha-233146) Calling .GetSSHKeyPath
	I0912 22:22:28.115605   34238 main.go:141] libmachine: (ha-233146) Calling .GetSSHUsername
	I0912 22:22:28.115747   34238 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/ha-233146/id_rsa Username:docker}
	I0912 22:22:28.199063   34238 ssh_runner.go:195] Run: systemctl --version
	I0912 22:22:28.206249   34238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:22:28.222710   34238 kubeconfig.go:125] found "ha-233146" server: "https://192.168.39.254:8443"
	I0912 22:22:28.222747   34238 api_server.go:166] Checking apiserver status ...
	I0912 22:22:28.222799   34238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:22:28.239737   34238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1074/cgroup
	W0912 22:22:28.248781   34238 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1074/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:22:28.248820   34238 ssh_runner.go:195] Run: ls
	I0912 22:22:28.253116   34238 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:22:28.257311   34238 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:22:28.257332   34238 status.go:422] ha-233146 apiserver status = Running (err=<nil>)
	I0912 22:22:28.257341   34238 status.go:257] ha-233146 status: &{Name:ha-233146 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:22:28.257363   34238 status.go:255] checking status of ha-233146-m02 ...
	I0912 22:22:28.257652   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.257709   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.272580   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0912 22:22:28.272906   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.273407   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.273427   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.273770   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.273964   34238 main.go:141] libmachine: (ha-233146-m02) Calling .GetState
	I0912 22:22:28.275533   34238 status.go:330] ha-233146-m02 host status = "Stopped" (err=<nil>)
	I0912 22:22:28.275545   34238 status.go:343] host is not running, skipping remaining checks
	I0912 22:22:28.275551   34238 status.go:257] ha-233146-m02 status: &{Name:ha-233146-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:22:28.275572   34238 status.go:255] checking status of ha-233146-m03 ...
	I0912 22:22:28.275852   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.275906   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.289701   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0912 22:22:28.290093   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.290497   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.290517   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.290852   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.291018   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetState
	I0912 22:22:28.292489   34238 status.go:330] ha-233146-m03 host status = "Running" (err=<nil>)
	I0912 22:22:28.292505   34238 host.go:66] Checking if "ha-233146-m03" exists ...
	I0912 22:22:28.292793   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.292821   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.306955   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0912 22:22:28.307423   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.307946   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.307971   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.308271   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.308456   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetIP
	I0912 22:22:28.311314   34238 main.go:141] libmachine: (ha-233146-m03) DBG | domain ha-233146-m03 has defined MAC address 52:54:00:39:04:ac in network mk-ha-233146
	I0912 22:22:28.311774   34238 main.go:141] libmachine: (ha-233146-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:04:ac", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:18:39 +0000 UTC Type:0 Mac:52:54:00:39:04:ac Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-233146-m03 Clientid:01:52:54:00:39:04:ac}
	I0912 22:22:28.311799   34238 main.go:141] libmachine: (ha-233146-m03) DBG | domain ha-233146-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:39:04:ac in network mk-ha-233146
	I0912 22:22:28.311927   34238 host.go:66] Checking if "ha-233146-m03" exists ...
	I0912 22:22:28.312294   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.312334   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.327281   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42437
	I0912 22:22:28.327676   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.328070   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.328086   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.328360   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.328547   34238 main.go:141] libmachine: (ha-233146-m03) Calling .DriverName
	I0912 22:22:28.328723   34238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:22:28.328744   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetSSHHostname
	I0912 22:22:28.331107   34238 main.go:141] libmachine: (ha-233146-m03) DBG | domain ha-233146-m03 has defined MAC address 52:54:00:39:04:ac in network mk-ha-233146
	I0912 22:22:28.331553   34238 main.go:141] libmachine: (ha-233146-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:04:ac", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:18:39 +0000 UTC Type:0 Mac:52:54:00:39:04:ac Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-233146-m03 Clientid:01:52:54:00:39:04:ac}
	I0912 22:22:28.331581   34238 main.go:141] libmachine: (ha-233146-m03) DBG | domain ha-233146-m03 has defined IP address 192.168.39.57 and MAC address 52:54:00:39:04:ac in network mk-ha-233146
	I0912 22:22:28.331731   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetSSHPort
	I0912 22:22:28.331888   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetSSHKeyPath
	I0912 22:22:28.332041   34238 main.go:141] libmachine: (ha-233146-m03) Calling .GetSSHUsername
	I0912 22:22:28.332172   34238 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/ha-233146-m03/id_rsa Username:docker}
	I0912 22:22:28.414523   34238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:22:28.437206   34238 kubeconfig.go:125] found "ha-233146" server: "https://192.168.39.254:8443"
	I0912 22:22:28.437229   34238 api_server.go:166] Checking apiserver status ...
	I0912 22:22:28.437279   34238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:22:28.458604   34238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0912 22:22:28.469442   34238 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:22:28.469507   34238 ssh_runner.go:195] Run: ls
	I0912 22:22:28.474163   34238 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:22:28.479750   34238 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:22:28.479778   34238 status.go:422] ha-233146-m03 apiserver status = Running (err=<nil>)
	I0912 22:22:28.479790   34238 status.go:257] ha-233146-m03 status: &{Name:ha-233146-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:22:28.479808   34238 status.go:255] checking status of ha-233146-m04 ...
	I0912 22:22:28.480170   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.480203   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.494701   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0912 22:22:28.495063   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.495475   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.495493   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.495793   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.495955   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetState
	I0912 22:22:28.497271   34238 status.go:330] ha-233146-m04 host status = "Running" (err=<nil>)
	I0912 22:22:28.497284   34238 host.go:66] Checking if "ha-233146-m04" exists ...
	I0912 22:22:28.497702   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.497745   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.512563   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0912 22:22:28.512939   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.513376   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.513395   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.513705   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.513885   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetIP
	I0912 22:22:28.516628   34238 main.go:141] libmachine: (ha-233146-m04) DBG | domain ha-233146-m04 has defined MAC address 52:54:00:c8:4f:ed in network mk-ha-233146
	I0912 22:22:28.517029   34238 main.go:141] libmachine: (ha-233146-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4f:ed", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:20:00 +0000 UTC Type:0 Mac:52:54:00:c8:4f:ed Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-233146-m04 Clientid:01:52:54:00:c8:4f:ed}
	I0912 22:22:28.517055   34238 main.go:141] libmachine: (ha-233146-m04) DBG | domain ha-233146-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:c8:4f:ed in network mk-ha-233146
	I0912 22:22:28.517187   34238 host.go:66] Checking if "ha-233146-m04" exists ...
	I0912 22:22:28.517560   34238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:22:28.517596   34238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:22:28.531815   34238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0912 22:22:28.532155   34238 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:22:28.532554   34238 main.go:141] libmachine: Using API Version  1
	I0912 22:22:28.532578   34238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:22:28.532863   34238 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:22:28.533033   34238 main.go:141] libmachine: (ha-233146-m04) Calling .DriverName
	I0912 22:22:28.533194   34238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:22:28.533218   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetSSHHostname
	I0912 22:22:28.535931   34238 main.go:141] libmachine: (ha-233146-m04) DBG | domain ha-233146-m04 has defined MAC address 52:54:00:c8:4f:ed in network mk-ha-233146
	I0912 22:22:28.536361   34238 main.go:141] libmachine: (ha-233146-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4f:ed", ip: ""} in network mk-ha-233146: {Iface:virbr1 ExpiryTime:2024-09-12 23:20:00 +0000 UTC Type:0 Mac:52:54:00:c8:4f:ed Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-233146-m04 Clientid:01:52:54:00:c8:4f:ed}
	I0912 22:22:28.536383   34238 main.go:141] libmachine: (ha-233146-m04) DBG | domain ha-233146-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:c8:4f:ed in network mk-ha-233146
	I0912 22:22:28.536595   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetSSHPort
	I0912 22:22:28.536785   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetSSHKeyPath
	I0912 22:22:28.536921   34238 main.go:141] libmachine: (ha-233146-m04) Calling .GetSSHUsername
	I0912 22:22:28.537105   34238 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/ha-233146-m04/id_rsa Username:docker}
	I0912 22:22:28.618713   34238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:22:28.634816   34238 status.go:257] ha-233146-m04 status: &{Name:ha-233146-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
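The probe `sh -c "df -h /var | awk 'NR==2{print $5}'"` in the log above is minikube's disk-usage check: line 2 of `df -h /var` is the data row, and field 5 is the `Use%` column. A standalone sketch of the same check (the `90` threshold is an illustrative assumption, not minikube's value):

```shell
#!/bin/sh
# Reproduce the disk-usage probe from the log: print the Use% column of /var.
# `df -h /var` line 2 is the data row; field 5 is "Use%", e.g. "23%".
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "/var usage: ${usage}"

# Strip the trailing % so the value can be compared numerically.
# A threshold of 90 is a hypothetical example, not what minikube uses.
pct=${usage%\%}
if [ "${pct}" -ge 90 ]; then
    echo "warning: /var is nearly full"
fi
```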
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.14s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.57s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-233146 node start m02 -v=7 --alsologtostderr: (39.704670907s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.83s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-233146 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-233146 -v=7 --alsologtostderr
E0912 22:24:35.673252   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:24:51.159328   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:25:03.377234   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-233146 -v=7 --alsologtostderr: (4m35.911352874s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-233146 --wait=true -v=7 --alsologtostderr
E0912 22:29:35.673227   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:29:51.159539   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-233146 --wait=true -v=7 --alsologtostderr: (2m49.832025301s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-233146
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.83s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.73s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-233146 node delete m03 -v=7 --alsologtostderr: (6.004102536s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
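The `kubectl get nodes -o go-template` invocation above walks every node's `status.conditions` and prints the status of each `Ready` condition. The same traversal in Python over the JSON form of the node list (the inlined `nodes_json` sample is illustrative, not captured from this run):

```python
import json

# Illustrative sample of `kubectl get nodes -o json` output (not from this run).
nodes_json = json.dumps({
    "items": [
        {"metadata": {"name": "ha-233146"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"},
         ]}},
        {"metadata": {"name": "ha-233146-m04"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"},
         ]}},
    ]
})

def ready_statuses(doc):
    """Mirror the go-template: for every item, for every condition,
    emit the status of conditions whose type is "Ready"."""
    nodes = json.loads(doc)
    return [cond["status"]
            for item in nodes["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]

print(ready_statuses(nodes_json))  # one "True" per Ready node
```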
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.73s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/StopCluster (274.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 stop -v=7 --alsologtostderr
E0912 22:31:14.226919   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:34:35.673437   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:34:51.158893   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-233146 stop -v=7 --alsologtostderr: (4m34.497304455s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr: exit status 7 (93.753301ms)

-- stdout --
	ha-233146
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-233146-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-233146-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I0912 22:35:17.567650   38024 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:35:17.567984   38024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:35:17.568000   38024 out.go:358] Setting ErrFile to fd 2...
	I0912 22:35:17.568006   38024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:35:17.568430   38024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:35:17.568764   38024 out.go:352] Setting JSON to false
	I0912 22:35:17.568833   38024 notify.go:220] Checking for updates...
	I0912 22:35:17.568800   38024 mustload.go:65] Loading cluster: ha-233146
	I0912 22:35:17.569522   38024 config.go:182] Loaded profile config "ha-233146": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:35:17.569551   38024 status.go:255] checking status of ha-233146 ...
	I0912 22:35:17.569980   38024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:35:17.570020   38024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:35:17.584599   38024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0912 22:35:17.585020   38024 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:35:17.585530   38024 main.go:141] libmachine: Using API Version  1
	I0912 22:35:17.585557   38024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:35:17.585962   38024 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:35:17.586146   38024 main.go:141] libmachine: (ha-233146) Calling .GetState
	I0912 22:35:17.587706   38024 status.go:330] ha-233146 host status = "Stopped" (err=<nil>)
	I0912 22:35:17.587722   38024 status.go:343] host is not running, skipping remaining checks
	I0912 22:35:17.587730   38024 status.go:257] ha-233146 status: &{Name:ha-233146 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:35:17.587772   38024 status.go:255] checking status of ha-233146-m02 ...
	I0912 22:35:17.588036   38024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:35:17.588069   38024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:35:17.602046   38024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0912 22:35:17.602378   38024 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:35:17.602812   38024 main.go:141] libmachine: Using API Version  1
	I0912 22:35:17.602841   38024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:35:17.603148   38024 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:35:17.603365   38024 main.go:141] libmachine: (ha-233146-m02) Calling .GetState
	I0912 22:35:17.604707   38024 status.go:330] ha-233146-m02 host status = "Stopped" (err=<nil>)
	I0912 22:35:17.604722   38024 status.go:343] host is not running, skipping remaining checks
	I0912 22:35:17.604745   38024 status.go:257] ha-233146-m02 status: &{Name:ha-233146-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:35:17.604767   38024 status.go:255] checking status of ha-233146-m04 ...
	I0912 22:35:17.605157   38024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:35:17.605196   38024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:35:17.618968   38024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0912 22:35:17.619300   38024 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:35:17.619696   38024 main.go:141] libmachine: Using API Version  1
	I0912 22:35:17.619717   38024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:35:17.620004   38024 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:35:17.620162   38024 main.go:141] libmachine: (ha-233146-m04) Calling .GetState
	I0912 22:35:17.621853   38024 status.go:330] ha-233146-m04 host status = "Stopped" (err=<nil>)
	I0912 22:35:17.621866   38024 status.go:343] host is not running, skipping remaining checks
	I0912 22:35:17.621871   38024 status.go:257] ha-233146-m04 status: &{Name:ha-233146-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.59s)

TestMultiControlPlane/serial/RestartCluster (156.5s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-233146 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0912 22:35:58.740747   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-233146 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m35.768936332s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (156.50s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (79.79s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-233146 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-233146 --control-plane -v=7 --alsologtostderr: (1m18.988930835s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-233146 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestJSONOutput/start/Command (56.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-392284 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0912 22:39:35.676557   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:51.160374   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-392284 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (56.223649257s)
--- PASS: TestJSONOutput/start/Command (56.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-392284 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-392284 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.58s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-392284 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-392284 --output=json --user=testUser: (6.575647667s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-476163 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-476163 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.107089ms)

-- stdout --
	{"specversion":"1.0","id":"a432786f-02b7-414c-8166-3b302a20b558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-476163] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"28c118ec-647e-4401-bfd2-885c5dd63b80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"36d8b9e6-ab71-4079-b73e-444255b62a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c75058bb-a075-4646-a6e4-a3fc2565633e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig"}}
	{"specversion":"1.0","id":"c570ce35-f606-4326-85e6-5c4dc0c6030e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube"}}
	{"specversion":"1.0","id":"54a9c272-6199-4b1b-8684-4dbb8cf1e329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"39a16432-b4c8-4509-b24c-5974941567e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ffdd9c2f-8b53-4ada-8dd0-8fdfa50fcb51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-476163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-476163
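Each stdout line in TestErrorJSONOutput above is a CloudEvents-style JSON envelope whose `type` distinguishes steps, info messages, and errors. A minimal sketch of consuming such a stream and surfacing the first error event (the inlined event is copied verbatim from the log above; the helper name is our own):

```python
import json

# One error event as emitted by `minikube start --output=json` (from the log above).
stream = """\
{"specversion":"1.0","id":"ffdd9c2f-8b53-4ada-8dd0-8fdfa50fcb51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
"""

def first_error(lines):
    """Return (exit code, message) of the first minikube error event, else None."""
    for line in lines:
        event = json.loads(line)
        if event["type"] == "io.k8s.sigs.minikube.error":
            data = event["data"]
            return int(data["exitcode"]), data["message"]
    return None

code, msg = first_error(stream.splitlines())
print(code, msg)  # 56 The driver 'fail' is not supported on linux/amd64
```

The `exitcode` matches the `exit status 56` reported by the test harness, which is how the JSON stream and the process exit status stay consistent.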
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (88.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-046510 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-046510 --driver=kvm2  --container-runtime=containerd: (44.25408413s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-049681 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-049681 --driver=kvm2  --container-runtime=containerd: (42.058902506s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-046510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-049681
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-049681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-049681
helpers_test.go:175: Cleaning up "first-046510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-046510
--- PASS: TestMinikubeProfile (88.69s)

TestMountStart/serial/StartWithMountFirst (28.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-063538 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-063538 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.829948713s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.83s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-063538 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-063538 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (30.01s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-080180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-080180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.013030116s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.01s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-063538 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-080180
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-080180: (1.276360673s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-080180
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-080180: (22.797324266s)
--- PASS: TestMountStart/serial/RestartStopped (23.80s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080180 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (113.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-568876 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0912 22:44:35.673443   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:51.159376   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-568876 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m52.669255679s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.06s)

TestMultiNode/serial/DeployApp2Nodes (7.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-568876 -- rollout status deployment/busybox: (5.6579665s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-9fk4x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-gnw9b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-9fk4x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-gnw9b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-9fk4x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-gnw9b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.10s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-9fk4x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-9fk4x -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-gnw9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-568876 -- exec busybox-7dff88458-gnw9b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
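The test above extracts the host IP inside each busybox pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` and then pings it. A minimal sketch of that pipeline, run against a hypothetical sample in the older BusyBox `nslookup` format (line 5 is the answer line, `Address 1: <ip> <name>`); the sample output is illustrative, not captured from this run:

```shell
#!/bin/sh
# Hypothetical BusyBox-style nslookup output (not from this test run).
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 selects the answer line; with a single-space delimiter,
# field 3 is the resolved host IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The fixed `NR==5` index is why the pipeline is sensitive to the `nslookup` output format: a resolver that prints a different number of preamble lines would shift the answer line.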

TestMultiNode/serial/AddNode (53.63s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-568876 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-568876 -v 3 --alsologtostderr: (53.083421142s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.63s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-568876 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp testdata/cp-test.txt multinode-568876:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile761469273/001/cp-test_multinode-568876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876:/home/docker/cp-test.txt multinode-568876-m02:/home/docker/cp-test_multinode-568876_multinode-568876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test_multinode-568876_multinode-568876-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876:/home/docker/cp-test.txt multinode-568876-m03:/home/docker/cp-test_multinode-568876_multinode-568876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test_multinode-568876_multinode-568876-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp testdata/cp-test.txt multinode-568876-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile761469273/001/cp-test_multinode-568876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m02:/home/docker/cp-test.txt multinode-568876:/home/docker/cp-test_multinode-568876-m02_multinode-568876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test_multinode-568876-m02_multinode-568876.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m02:/home/docker/cp-test.txt multinode-568876-m03:/home/docker/cp-test_multinode-568876-m02_multinode-568876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test_multinode-568876-m02_multinode-568876-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp testdata/cp-test.txt multinode-568876-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile761469273/001/cp-test_multinode-568876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m03:/home/docker/cp-test.txt multinode-568876:/home/docker/cp-test_multinode-568876-m03_multinode-568876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876 "sudo cat /home/docker/cp-test_multinode-568876-m03_multinode-568876.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 cp multinode-568876-m03:/home/docker/cp-test.txt multinode-568876-m02:/home/docker/cp-test_multinode-568876-m03_multinode-568876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 ssh -n multinode-568876-m02 "sudo cat /home/docker/cp-test_multinode-568876-m03_multinode-568876-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.00s)

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-568876 node stop m03: (1.301009735s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-568876 status: exit status 7 (412.789861ms)

-- stdout --
	multinode-568876
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-568876-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-568876-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr: exit status 7 (410.347769ms)

-- stdout --
	multinode-568876
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-568876-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-568876-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 22:46:21.594591   45677 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:46:21.594698   45677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:46:21.594706   45677 out.go:358] Setting ErrFile to fd 2...
	I0912 22:46:21.594710   45677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:46:21.594868   45677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:46:21.595011   45677 out.go:352] Setting JSON to false
	I0912 22:46:21.595035   45677 mustload.go:65] Loading cluster: multinode-568876
	I0912 22:46:21.595133   45677 notify.go:220] Checking for updates...
	I0912 22:46:21.595369   45677 config.go:182] Loaded profile config "multinode-568876": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:46:21.595381   45677 status.go:255] checking status of multinode-568876 ...
	I0912 22:46:21.595733   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.595796   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.615573   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0912 22:46:21.615997   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.616676   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.616702   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.617122   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.617379   45677 main.go:141] libmachine: (multinode-568876) Calling .GetState
	I0912 22:46:21.619187   45677 status.go:330] multinode-568876 host status = "Running" (err=<nil>)
	I0912 22:46:21.619209   45677 host.go:66] Checking if "multinode-568876" exists ...
	I0912 22:46:21.619461   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.619500   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.635110   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0912 22:46:21.635472   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.635884   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.635900   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.636213   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.636396   45677 main.go:141] libmachine: (multinode-568876) Calling .GetIP
	I0912 22:46:21.639041   45677 main.go:141] libmachine: (multinode-568876) DBG | domain multinode-568876 has defined MAC address 52:54:00:5c:20:48 in network mk-multinode-568876
	I0912 22:46:21.639421   45677 main.go:141] libmachine: (multinode-568876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:20:48", ip: ""} in network mk-multinode-568876: {Iface:virbr1 ExpiryTime:2024-09-12 23:43:32 +0000 UTC Type:0 Mac:52:54:00:5c:20:48 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-568876 Clientid:01:52:54:00:5c:20:48}
	I0912 22:46:21.639448   45677 main.go:141] libmachine: (multinode-568876) DBG | domain multinode-568876 has defined IP address 192.168.39.216 and MAC address 52:54:00:5c:20:48 in network mk-multinode-568876
	I0912 22:46:21.639555   45677 host.go:66] Checking if "multinode-568876" exists ...
	I0912 22:46:21.639890   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.639959   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.655086   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0912 22:46:21.655470   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.655991   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.656012   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.656313   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.656526   45677 main.go:141] libmachine: (multinode-568876) Calling .DriverName
	I0912 22:46:21.656707   45677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:46:21.656730   45677 main.go:141] libmachine: (multinode-568876) Calling .GetSSHHostname
	I0912 22:46:21.659536   45677 main.go:141] libmachine: (multinode-568876) DBG | domain multinode-568876 has defined MAC address 52:54:00:5c:20:48 in network mk-multinode-568876
	I0912 22:46:21.660069   45677 main.go:141] libmachine: (multinode-568876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:20:48", ip: ""} in network mk-multinode-568876: {Iface:virbr1 ExpiryTime:2024-09-12 23:43:32 +0000 UTC Type:0 Mac:52:54:00:5c:20:48 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-568876 Clientid:01:52:54:00:5c:20:48}
	I0912 22:46:21.660095   45677 main.go:141] libmachine: (multinode-568876) DBG | domain multinode-568876 has defined IP address 192.168.39.216 and MAC address 52:54:00:5c:20:48 in network mk-multinode-568876
	I0912 22:46:21.660216   45677 main.go:141] libmachine: (multinode-568876) Calling .GetSSHPort
	I0912 22:46:21.660377   45677 main.go:141] libmachine: (multinode-568876) Calling .GetSSHKeyPath
	I0912 22:46:21.660530   45677 main.go:141] libmachine: (multinode-568876) Calling .GetSSHUsername
	I0912 22:46:21.660647   45677 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/multinode-568876/id_rsa Username:docker}
	I0912 22:46:21.741068   45677 ssh_runner.go:195] Run: systemctl --version
	I0912 22:46:21.747175   45677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:46:21.761761   45677 kubeconfig.go:125] found "multinode-568876" server: "https://192.168.39.216:8443"
	I0912 22:46:21.761796   45677 api_server.go:166] Checking apiserver status ...
	I0912 22:46:21.761829   45677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:46:21.775374   45677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup
	W0912 22:46:21.784886   45677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:46:21.784935   45677 ssh_runner.go:195] Run: ls
	I0912 22:46:21.789199   45677 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I0912 22:46:21.793180   45677 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I0912 22:46:21.793199   45677 status.go:422] multinode-568876 apiserver status = Running (err=<nil>)
	I0912 22:46:21.793210   45677 status.go:257] multinode-568876 status: &{Name:multinode-568876 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:46:21.793227   45677 status.go:255] checking status of multinode-568876-m02 ...
	I0912 22:46:21.793504   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.793551   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.808415   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0912 22:46:21.808791   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.809154   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.809175   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.809496   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.809691   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetState
	I0912 22:46:21.811173   45677 status.go:330] multinode-568876-m02 host status = "Running" (err=<nil>)
	I0912 22:46:21.811190   45677 host.go:66] Checking if "multinode-568876-m02" exists ...
	I0912 22:46:21.811465   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.811501   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.826356   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0912 22:46:21.826723   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.827143   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.827166   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.827424   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.827591   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetIP
	I0912 22:46:21.830263   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | domain multinode-568876-m02 has defined MAC address 52:54:00:96:d4:c6 in network mk-multinode-568876
	I0912 22:46:21.830673   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:d4:c6", ip: ""} in network mk-multinode-568876: {Iface:virbr1 ExpiryTime:2024-09-12 23:44:34 +0000 UTC Type:0 Mac:52:54:00:96:d4:c6 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:multinode-568876-m02 Clientid:01:52:54:00:96:d4:c6}
	I0912 22:46:21.830711   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | domain multinode-568876-m02 has defined IP address 192.168.39.72 and MAC address 52:54:00:96:d4:c6 in network mk-multinode-568876
	I0912 22:46:21.830848   45677 host.go:66] Checking if "multinode-568876-m02" exists ...
	I0912 22:46:21.831133   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.831164   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.846669   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
	I0912 22:46:21.847022   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.847444   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.847463   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.847706   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.847867   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .DriverName
	I0912 22:46:21.848070   45677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:46:21.848086   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetSSHHostname
	I0912 22:46:21.850638   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | domain multinode-568876-m02 has defined MAC address 52:54:00:96:d4:c6 in network mk-multinode-568876
	I0912 22:46:21.851112   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:d4:c6", ip: ""} in network mk-multinode-568876: {Iface:virbr1 ExpiryTime:2024-09-12 23:44:34 +0000 UTC Type:0 Mac:52:54:00:96:d4:c6 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:multinode-568876-m02 Clientid:01:52:54:00:96:d4:c6}
	I0912 22:46:21.851131   45677 main.go:141] libmachine: (multinode-568876-m02) DBG | domain multinode-568876-m02 has defined IP address 192.168.39.72 and MAC address 52:54:00:96:d4:c6 in network mk-multinode-568876
	I0912 22:46:21.851296   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetSSHPort
	I0912 22:46:21.851470   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetSSHKeyPath
	I0912 22:46:21.851638   45677 main.go:141] libmachine: (multinode-568876-m02) Calling .GetSSHUsername
	I0912 22:46:21.851771   45677 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5898/.minikube/machines/multinode-568876-m02/id_rsa Username:docker}
	I0912 22:46:21.932711   45677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:46:21.946334   45677 status.go:257] multinode-568876-m02 status: &{Name:multinode-568876-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:46:21.946364   45677 status.go:255] checking status of multinode-568876-m03 ...
	I0912 22:46:21.946667   45677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:46:21.946701   45677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:46:21.961787   45677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0912 22:46:21.962196   45677 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:46:21.962620   45677 main.go:141] libmachine: Using API Version  1
	I0912 22:46:21.962639   45677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:46:21.962943   45677 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:46:21.963132   45677 main.go:141] libmachine: (multinode-568876-m03) Calling .GetState
	I0912 22:46:21.964659   45677 status.go:330] multinode-568876-m03 host status = "Stopped" (err=<nil>)
	I0912 22:46:21.964673   45677 status.go:343] host is not running, skipping remaining checks
	I0912 22:46:21.964680   45677 status.go:257] multinode-568876-m03 status: &{Name:multinode-568876-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)

TestMultiNode/serial/StartAfterStop (33.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-568876 node start m03 -v=7 --alsologtostderr: (33.224071508s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.83s)

TestMultiNode/serial/RestartKeepsNodes (330.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-568876
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-568876
E0912 22:47:54.231004   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:35.676246   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:51.160226   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-568876: (3m4.21635284s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-568876 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-568876 --wait=true -v=8 --alsologtostderr: (2m25.785291478s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-568876
--- PASS: TestMultiNode/serial/RestartKeepsNodes (330.08s)

TestMultiNode/serial/DeleteNode (1.94s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-568876 node delete m03: (1.433873823s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.94s)

TestMultiNode/serial/StopMultiNode (183.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 stop
E0912 22:52:38.744290   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:35.675821   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:51.160519   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-568876 stop: (3m2.962114887s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-568876 status: exit status 7 (77.188144ms)

-- stdout --
	multinode-568876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-568876-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr: exit status 7 (80.787628ms)

-- stdout --
	multinode-568876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-568876-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 22:55:30.894107   48495 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:55:30.894219   48495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:55:30.894228   48495 out.go:358] Setting ErrFile to fd 2...
	I0912 22:55:30.894232   48495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:55:30.894393   48495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 22:55:30.894532   48495 out.go:352] Setting JSON to false
	I0912 22:55:30.894557   48495 mustload.go:65] Loading cluster: multinode-568876
	I0912 22:55:30.894598   48495 notify.go:220] Checking for updates...
	I0912 22:55:30.895041   48495 config.go:182] Loaded profile config "multinode-568876": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:55:30.895063   48495 status.go:255] checking status of multinode-568876 ...
	I0912 22:55:30.895477   48495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:55:30.895518   48495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:55:30.915155   48495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0912 22:55:30.915541   48495 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:55:30.915995   48495 main.go:141] libmachine: Using API Version  1
	I0912 22:55:30.916013   48495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:55:30.916348   48495 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:55:30.916546   48495 main.go:141] libmachine: (multinode-568876) Calling .GetState
	I0912 22:55:30.918015   48495 status.go:330] multinode-568876 host status = "Stopped" (err=<nil>)
	I0912 22:55:30.918029   48495 status.go:343] host is not running, skipping remaining checks
	I0912 22:55:30.918036   48495 status.go:257] multinode-568876 status: &{Name:multinode-568876 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:55:30.918069   48495 status.go:255] checking status of multinode-568876-m02 ...
	I0912 22:55:30.918335   48495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 22:55:30.918372   48495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:55:30.932583   48495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0912 22:55:30.932917   48495 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:55:30.933311   48495 main.go:141] libmachine: Using API Version  1
	I0912 22:55:30.933337   48495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:55:30.933586   48495 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:55:30.933806   48495 main.go:141] libmachine: (multinode-568876-m02) Calling .GetState
	I0912 22:55:30.935119   48495 status.go:330] multinode-568876-m02 host status = "Stopped" (err=<nil>)
	I0912 22:55:30.935131   48495 status.go:343] host is not running, skipping remaining checks
	I0912 22:55:30.935138   48495 status.go:257] multinode-568876-m02 status: &{Name:multinode-568876-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.12s)

TestMultiNode/serial/RestartMultiNode (106.14s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-568876 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-568876 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m45.622317943s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-568876 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.14s)

TestMultiNode/serial/ValidateNameConflict (44.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-568876
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-568876-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-568876-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (62.036154ms)

-- stdout --
	* [multinode-568876-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-568876-m02' is duplicated with machine name 'multinode-568876-m02' in profile 'multinode-568876'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-568876-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-568876-m03 --driver=kvm2  --container-runtime=containerd: (42.879932799s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-568876
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-568876: exit status 80 (202.26867ms)

-- stdout --
	* Adding node m03 to cluster multinode-568876 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-568876-m03 already exists in multinode-568876-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-568876-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.15s)

TestPreload (284.51s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-991761 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0912 22:59:35.673236   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:51.159549   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-991761 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m1.456092975s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-991761 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-991761 image pull gcr.io/k8s-minikube/busybox: (4.119105047s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-991761
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-991761: (1m31.428011764s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-991761 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-991761 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m6.229423559s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-991761 image list
helpers_test.go:175: Cleaning up "test-preload-991761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-991761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-991761: (1.062240462s)
--- PASS: TestPreload (284.51s)

TestScheduledStopUnix (119.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-656311 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-656311 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.178054233s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-656311 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-656311 -n scheduled-stop-656311
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-656311 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-656311 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-656311 -n scheduled-stop-656311
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-656311
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-656311 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0912 23:04:34.234274   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:35.673165   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-656311
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-656311: exit status 7 (60.077505ms)

-- stdout --
	scheduled-stop-656311
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-656311 -n scheduled-stop-656311
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-656311 -n scheduled-stop-656311: exit status 7 (63.735234ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-656311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-656311
--- PASS: TestScheduledStopUnix (119.73s)

TestRunningBinaryUpgrade (200.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3186262871 start -p running-upgrade-702317 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3186262871 start -p running-upgrade-702317 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m32.831118804s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-702317 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0912 23:09:35.673213   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-702317 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m43.271637471s)
helpers_test.go:175: Cleaning up "running-upgrade-702317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-702317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-702317: (1.166485411s)
--- PASS: TestRunningBinaryUpgrade (200.54s)

TestKubernetesUpgrade (208.88s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.029463982s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-824865
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-824865: (1.429970892s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-824865 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-824865 status --format={{.Host}}: exit status 7 (63.21279ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.999790963s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-824865 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (77.178028ms)

-- stdout --
	* [kubernetes-upgrade-824865] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-824865
	    minikube start -p kubernetes-upgrade-824865 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8248652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-824865 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0912 23:09:51.158743   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-824865 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.867889936s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-824865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-824865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-824865: (1.359905902s)
--- PASS: TestKubernetesUpgrade (208.88s)

TestPause/serial/Start (84.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405808 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-405808 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m24.638118695s)
--- PASS: TestPause/serial/Start (84.64s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (75.209105ms)

-- stdout --
	* [NoKubernetes-428535] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (98.04s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-428535 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-428535 --driver=kvm2  --container-runtime=containerd: (1m37.796749121s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-428535 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.04s)

TestNetworkPlugins/group/false (2.81s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-786216 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-786216 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (101.315814ms)

-- stdout --
	* [false-786216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0912 23:04:49.994936   53184 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:04:49.995023   53184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:04:49.995030   53184 out.go:358] Setting ErrFile to fd 2...
	I0912 23:04:49.995034   53184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:04:49.995187   53184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5898/.minikube/bin
	I0912 23:04:49.995703   53184 out.go:352] Setting JSON to false
	I0912 23:04:49.996524   53184 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6431,"bootTime":1726175859,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:04:49.996581   53184 start.go:139] virtualization: kvm guest
	I0912 23:04:49.998676   53184 out.go:177] * [false-786216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:04:49.999963   53184 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:04:49.999959   53184 notify.go:220] Checking for updates...
	I0912 23:04:50.001403   53184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:04:50.002931   53184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5898/kubeconfig
	I0912 23:04:50.004189   53184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5898/.minikube
	I0912 23:04:50.005541   53184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:04:50.006944   53184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:04:50.008528   53184 config.go:182] Loaded profile config "NoKubernetes-428535": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:04:50.008698   53184 config.go:182] Loaded profile config "offline-containerd-312938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:04:50.008821   53184 config.go:182] Loaded profile config "pause-405808": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:04:50.008926   53184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:04:50.043409   53184 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 23:04:50.044803   53184 start.go:297] selected driver: kvm2
	I0912 23:04:50.044819   53184 start.go:901] validating driver "kvm2" against <nil>
	I0912 23:04:50.044831   53184 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:04:50.046855   53184 out.go:201] 
	W0912 23:04:50.048125   53184 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0912 23:04:50.049385   53184 out.go:201] 

** /stderr **
E0912 23:04:51.159538   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-786216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-786216

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-786216

>>> host: /etc/nsswitch.conf:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/hosts:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/resolv.conf:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-786216

>>> host: crictl pods:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: crictl containers:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> k8s: describe netcat deployment:
error: context "false-786216" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-786216" does not exist

>>> k8s: netcat logs:
error: context "false-786216" does not exist

>>> k8s: describe coredns deployment:
error: context "false-786216" does not exist

>>> k8s: describe coredns pods:
error: context "false-786216" does not exist

>>> k8s: coredns logs:
error: context "false-786216" does not exist

>>> k8s: describe api server pod(s):
error: context "false-786216" does not exist

>>> k8s: api server logs:
error: context "false-786216" does not exist

>>> host: /etc/cni:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: ip a s:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: ip r s:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: iptables-save:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: iptables table nat:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> k8s: describe kube-proxy daemon set:
error: context "false-786216" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-786216" does not exist

>>> k8s: kube-proxy logs:
error: context "false-786216" does not exist

>>> host: kubelet daemon status:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: kubelet daemon config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> k8s: kubelet logs:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-786216

>>> host: docker daemon status:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: docker daemon config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/docker/daemon.json:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: docker system info:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: cri-docker daemon status:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: cri-docker daemon config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: cri-dockerd version:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: containerd daemon status:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: containerd daemon config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/containerd/config.toml:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: containerd config dump:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: crio daemon status:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: crio daemon config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: /etc/crio:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

>>> host: crio config:
* Profile "false-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-786216"

----------------------- debugLogs end: false-786216 [took: 2.571611751s] --------------------------------
helpers_test.go:175: Cleaning up "false-786216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-786216
--- PASS: TestNetworkPlugins/group/false (2.81s)

TestPause/serial/SecondStartNoReconfiguration (88.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-405808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.48514197s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (88.50s)

TestNoKubernetes/serial/StartWithStopK8s (54.69s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (52.948398749s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-428535 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-428535 status -o json: exit status 2 (260.344213ms)

-- stdout --
	{"Name":"NoKubernetes-428535","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-428535
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-428535: (1.478472501s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (54.69s)

TestStoppedBinaryUpgrade/Setup (3.56s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.56s)

TestStoppedBinaryUpgrade/Upgrade (199.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.879719836 start -p stopped-upgrade-459653 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.879719836 start -p stopped-upgrade-459653 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m59.101217691s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.879719836 -p stopped-upgrade-459653 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.879719836 -p stopped-upgrade-459653 stop: (1.478616332s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-459653 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-459653 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m18.503507899s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (199.08s)

TestNoKubernetes/serial/Start (48.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-428535 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (48.844064653s)
--- PASS: TestNoKubernetes/serial/Start (48.84s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-405808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-405808 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-405808 --output=json --layout=cluster: exit status 2 (244.015709ms)

-- stdout --
	{"Name":"pause-405808","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-405808","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-405808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-405808 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (1.16s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-405808 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-405808 --alsologtostderr -v=5: (1.163916881s)
--- PASS: TestPause/serial/DeletePaused (1.16s)

TestPause/serial/VerifyDeletedResources (0.27s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-428535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-428535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.864751ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.83s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.83s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.62s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-428535
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-428535: (1.62329024s)
--- PASS: TestNoKubernetes/serial/Stop (1.62s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (61.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-428535 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-428535 --driver=kvm2  --container-runtime=containerd: (1m1.168777893s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (61.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-428535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-428535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.959023ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-459653
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (185.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-474897 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-474897 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m5.855990957s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (185.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (77.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-913479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-913479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m17.63007911s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (127.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-392218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-392218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (2m7.889679949s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (127.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-913479 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec79e3f5-7cfb-4746-9459-4939187d3741] Pending
helpers_test.go:344: "busybox" [ec79e3f5-7cfb-4746-9459-4939187d3741] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec79e3f5-7cfb-4746-9459-4939187d3741] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.005268872s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-913479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-913479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-913479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-913479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-913479 --alsologtostderr -v=3: (1m31.758390897s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-392218 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f0ed0611-880c-44d6-a2cf-eb29cf6650df] Pending
helpers_test.go:344: "busybox" [f0ed0611-880c-44d6-a2cf-eb29cf6650df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f0ed0611-880c-44d6-a2cf-eb29cf6650df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003948618s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-392218 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-392218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-392218 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-392218 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-392218 --alsologtostderr -v=3: (1m32.173832398s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-474897 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7552c73c-706d-44b1-9c78-eb39819a6d8c] Pending
helpers_test.go:344: "busybox" [7552c73c-706d-44b1-9c78-eb39819a6d8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7552c73c-706d-44b1-9c78-eb39819a6d8c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003280276s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-474897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-913479 -n embed-certs-913479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-913479 -n embed-certs-913479: exit status 7 (63.772952ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-913479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (296.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-913479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-913479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m56.091198338s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-913479 -n embed-certs-913479
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (296.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-474897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-474897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-474897 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-474897 --alsologtostderr -v=3: (1m32.032829731s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-074475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
E0912 23:14:51.158797   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-074475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m31.646745397s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392218 -n no-preload-392218
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392218 -n no-preload-392218: exit status 7 (63.015131ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-392218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (324.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-392218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-392218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m23.985727648s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392218 -n no-preload-392218
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (324.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-474897 -n old-k8s-version-474897
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-474897 -n old-k8s-version-474897: exit status 7 (82.467872ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-474897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (163.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-474897 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-474897 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m42.765831359s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-474897 -n old-k8s-version-474897
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (163.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-074475 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0cb08ef8-3cdd-4850-a80b-5069802af3af] Pending
helpers_test.go:344: "busybox" [0cb08ef8-3cdd-4850-a80b-5069802af3af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0cb08ef8-3cdd-4850-a80b-5069802af3af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004777677s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-074475 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-074475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-074475 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-074475 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-074475 --alsologtostderr -v=3: (1m31.745975033s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475: exit status 7 (65.161968ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-074475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-074475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-074475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m14.065833547s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vbf7z" [3fead8e3-d0ac-444c-95b0-8ee56ed151ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004631531s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vbf7z" [3fead8e3-d0ac-444c-95b0-8ee56ed151ce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005439113s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-474897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-474897 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-474897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-474897 -n old-k8s-version-474897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-474897 -n old-k8s-version-474897: exit status 2 (270.410772ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-474897 -n old-k8s-version-474897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-474897 -n old-k8s-version-474897: exit status 2 (266.333845ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-474897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-474897 -n old-k8s-version-474897
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-474897 -n old-k8s-version-474897
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

TestStartStop/group/newest-cni/serial/FirstStart (48.74s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-988172 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-988172 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (48.742045113s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.74s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hbn22" [fe0f4d0b-e9f8-4a9b-801a-eb94ba967f3f] Running
E0912 23:18:44.395074   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.401476   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.412824   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.434205   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.475595   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.557047   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:44.718528   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:45.039807   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:45.681296   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:46.963101   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:18:49.524820   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004455613s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hbn22" [fe0f4d0b-e9f8-4a9b-801a-eb94ba967f3f] Running
E0912 23:18:54.646282   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00538466s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-913479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-913479 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-913479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-913479 -n embed-certs-913479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-913479 -n embed-certs-913479: exit status 2 (252.397518ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-913479 -n embed-certs-913479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-913479 -n embed-certs-913479: exit status 2 (241.935034ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-913479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-913479 --alsologtostderr -v=1: (1.251385342s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-913479 -n embed-certs-913479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-913479 -n embed-certs-913479
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)

TestNetworkPlugins/group/auto/Start (86.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m26.747060131s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.75s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-988172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-988172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040041239s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (91.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-988172 --alsologtostderr -v=3
E0912 23:19:25.370415   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:19:35.673372   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:19:51.159080   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:20:06.332260   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-988172 --alsologtostderr -v=3: (1m31.727343473s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (91.73s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n8fsc" [5e65bffb-40d5-4ba8-8284-669ff8bb03be] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005058981s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n8fsc" [5e65bffb-40d5-4ba8-8284-669ff8bb03be] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003573424s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-392218 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-392218 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.45s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-392218 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392218 -n no-preload-392218
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392218 -n no-preload-392218: exit status 2 (230.978994ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392218 -n no-preload-392218
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392218 -n no-preload-392218: exit status 2 (245.162634ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-392218 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392218 -n no-preload-392218
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392218 -n no-preload-392218
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.74s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.74s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sx5qf" [2b288b71-57cd-4ed1-9567-5ca656a78167] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sx5qf" [2b288b71-57cd-4ed1-9567-5ca656a78167] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004001225s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/Start (63.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m3.005935164s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.01s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-988172 -n newest-cni-988172
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-988172 -n newest-cni-988172: exit status 7 (67.799513ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-988172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (48.55s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-988172 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-988172 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.1: (48.305348014s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-988172 -n newest-cni-988172
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.55s)

TestNetworkPlugins/group/calico/Start (98.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0912 23:21:14.236508   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/addons-715398/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:21:28.254336   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m38.140877564s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-988172 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-988172 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-988172 -n newest-cni-988172
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-988172 -n newest-cni-988172: exit status 2 (239.302437ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-988172 -n newest-cni-988172
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-988172 -n newest-cni-988172: exit status 2 (246.185084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-988172 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-988172 -n newest-cni-988172
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-988172 -n newest-cni-988172
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.41s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-srrkn" [b036293c-7e00-49f8-9b8c-588fb43169e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004677317s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (81.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m21.726723597s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.73s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jw92z" [e98488f7-e4bf-43b0-864b-1769bf151bfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jw92z" [e98488f7-e4bf-43b0-864b-1769bf151bfe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005653997s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (95.74s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m35.737701586s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.74s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rqnjb" [df69e797-931a-4ff4-8adc-eb2564ec2770] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005151448s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bhd7s" [e50a0e6d-4c00-4652-9494-aceb30d074cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bhd7s" [e50a0e6d-4c00-4652-9494-aceb30d074cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006186183s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gmz54" [9a491dc2-a448-4994-be5f-e24d3147c78f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gmz54" [9a491dc2-a448-4994-be5f-e24d3147c78f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003945954s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xjb84" [bcf56ab7-3501-4142-8bd6-d736688633b1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xjb84" [bcf56ab7-3501-4142-8bd6-d736688633b1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.00450319s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-786216 exec deployment/netcat -- nslookup kubernetes.default
E0912 23:23:13.156462   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:13.162830   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:13.174651   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:13.196269   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:13.237646   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:13.318928   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0912 23:23:13.480463   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xjb84" [bcf56ab7-3501-4142-8bd6-d736688633b1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004729239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-074475 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.56s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0912 23:23:18.288117   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m13.561992285s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-074475 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-074475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475: exit status 2 (249.337119ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475: exit status 2 (242.251975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-074475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
E0912 23:23:23.410326   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074475 -n default-k8s-diff-port-074475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)
E0912 23:23:33.651682   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:44.395049   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/old-k8s-version-474897/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/bridge/Start (105.68s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-786216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m45.676853093s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-284k5" [87aa9840-8a60-40f3-92e2-2dd386b547d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:23:54.132950   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-284k5" [87aa9840-8a60-40f3-92e2-2dd386b547d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.111860549s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6q5k7" [a96f0a84-a567-4305-abfa-b1891a62edf4] Running
E0912 23:24:35.095277   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/no-preload-392218/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:35.672824   13168 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5898/.minikube/profiles/functional-279627/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004119956s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gfgnh" [9ad64a78-2055-41e7-bea0-993e4a28f537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gfgnh" [9ad64a78-2055-41e7-bea0-993e4a28f537] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004249394s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-786216 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-786216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2djn9" [6db936de-b9e8-456f-9452-1e73120dd883] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2djn9" [6db936de-b9e8-456f-9452-1e73120dd883] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00357914s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-786216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-786216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (36/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.15
263 TestNetworkPlugins/group/kubenet 2.74
272 TestNetworkPlugins/group/cilium 2.98

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-450669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-450669
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (2.74s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-786216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-786216

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-786216

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/hosts:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/resolv.conf:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-786216

>>> host: crictl pods:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: crictl containers:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> k8s: describe netcat deployment:
error: context "kubenet-786216" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-786216" does not exist

>>> k8s: netcat logs:
error: context "kubenet-786216" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-786216" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-786216" does not exist

>>> k8s: coredns logs:
error: context "kubenet-786216" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-786216" does not exist

>>> k8s: api server logs:
error: context "kubenet-786216" does not exist

>>> host: /etc/cni:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: ip a s:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: ip r s:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: iptables-save:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: iptables table nat:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-786216" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-786216" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-786216" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: kubelet daemon config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> k8s: kubelet logs:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-786216

>>> host: docker daemon status:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: docker daemon config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: docker system info:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: cri-docker daemon status:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: cri-docker daemon config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: cri-dockerd version:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: containerd daemon status:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: containerd daemon config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: containerd config dump:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: crio daemon status:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: crio daemon config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: /etc/crio:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

>>> host: crio config:
* Profile "kubenet-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-786216"

----------------------- debugLogs end: kubenet-786216 [took: 2.600597561s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-786216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-786216
--- SKIP: TestNetworkPlugins/group/kubenet (2.74s)

TestNetworkPlugins/group/cilium (2.98s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-786216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-786216

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-786216

>>> host: /etc/nsswitch.conf:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/hosts:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/resolv.conf:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-786216

>>> host: crictl pods:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: crictl containers:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> k8s: describe netcat deployment:
error: context "cilium-786216" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-786216" does not exist

>>> k8s: netcat logs:
error: context "cilium-786216" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-786216" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-786216" does not exist

>>> k8s: coredns logs:
error: context "cilium-786216" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-786216" does not exist

>>> k8s: api server logs:
error: context "cilium-786216" does not exist

>>> host: /etc/cni:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: ip a s:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: ip r s:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: iptables-save:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: iptables table nat:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-786216

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-786216

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-786216" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-786216" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-786216

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-786216

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-786216" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-786216" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-786216" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-786216" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-786216" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: kubelet daemon config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> k8s: kubelet logs:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-786216

>>> host: docker daemon status:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: docker daemon config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: docker system info:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: cri-docker daemon status:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: cri-docker daemon config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: cri-dockerd version:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: containerd daemon status:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: containerd daemon config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: containerd config dump:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: crio daemon status:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: crio daemon config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: /etc/crio:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

>>> host: crio config:
* Profile "cilium-786216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786216"

----------------------- debugLogs end: cilium-786216 [took: 2.846232718s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-786216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-786216
--- SKIP: TestNetworkPlugins/group/cilium (2.98s)